SparkSession (Spark 4.2.0 JavaDoc)
Method Details
-
builder
- Inherited documentation.
-
setActiveSession
public static void setActiveSession(SparkSession session)
- Inherited documentation.
-
setDefaultSession
public static void setDefaultSession(SparkSession session)
- Inherited documentation.
-
getActiveSession
public static scala.Option<SparkSession> getActiveSession()
- Inherited documentation.
-
getDefaultSession
public static scala.Option<SparkSession> getDefaultSession()
- Inherited documentation.
-
clearActiveSession
public static void clearActiveSession()
-
clearDefaultSession
public static void clearDefaultSession()
-
active
public static org.apache.spark.sql.SparkSessionCompanion active()
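A minimal sketch (not part of the original Javadoc) of typical use of the session-management statics above; the application name and master URL are illustrative assumptions:

  import org.apache.spark.sql.SparkSession

  // Create (or reuse) a session; it also becomes the default session and this thread's active session.
  val spark = SparkSession.builder().appName("example").master("local[*]").getOrCreate()

  // getActiveSession/getDefaultSession return scala.Option and may be empty.
  val active = SparkSession.getActiveSession.getOrElse(spark)

  // Clear the thread-local active session when this thread is done with it.
  SparkSession.clearActiveSession()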
-
addArtifacts
public void addArtifacts(URI... uri)
Add one or more artifacts to the session.
Currently it supports local files with extensions .jar and .class, and Apache Ivy URIs. See the example below.
- Parameters:
- uri - (undocumented)
- Since:
- 4.0.0
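A minimal sketch (not part of the original Javadoc), assuming an existing SparkSession named spark; the jar path and Ivy coordinate below are hypothetical:

  import java.net.URI

  // Add a local class/jar file and an Apache Ivy coordinate to the session
  // (the path and the ivy:// coordinate are placeholders, not real artifacts).
  spark.addArtifacts(
    new URI("file:///tmp/extra-udfs.jar"),
    new URI("ivy://com.example:example-lib:1.0.0"))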
-
sparkContext
The Spark context associated with this Spark session.
- Returns:
- (undocumented)
- Note:
- this is only supported in Classic.
-
version
public abstract String version()
The version of Spark on which this application is running.
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
sessionState
public abstract org.apache.spark.sql.internal.SessionState sessionState()
State isolated across sessions, including SQL configurations, temporary tables, registered functions, and everything else that accepts an org.apache.spark.sql.internal.SQLConf. If parentSessionState is not null, the SessionState will be a copy of the parent.
This is internal to Spark and there is no guarantee on interface stability.
- Returns:
- (undocumented)
- Since:
- 2.2.0
- Note:
- this is only supported in Classic.
-
sqlContext
A wrapped version of this session in the form of a SQLContext, for backward compatibility.
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
conf
Runtime configuration interface for Spark.
This is the interface through which the user can get and set all Spark and Hadoop configurations that are relevant to Spark SQL. When getting the value of a config, this defaults to the value set in the underlying SparkContext, if any. See the example below.
- Returns:
- (undocumented)
- Since:
- 2.0.0
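A minimal sketch (not part of the original Javadoc), assuming an existing SparkSession named spark; the config keys shown are just common examples:

  // Set a runtime-configurable SQL option and read it back.
  spark.conf.set("spark.sql.shuffle.partitions", "64")
  val partitions = spark.conf.get("spark.sql.shuffle.partitions")

  // Read with a fallback value when the key may be unset.
  val tz = spark.conf.get("spark.sql.session.timeZone", "UTC")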
-
listenerManager
An interface to register custom org.apache.spark.sql.util.QueryExecutionListeners that listen for execution metrics.
- Returns:
- (undocumented)
- Since:
- 2.0.0
- Note:
- this is only supported in Classic.
-
experimental
:: Experimental :: A collection of methods that are considered experimental, but can be used to hook into the query planner for advanced functionality.
- Returns:
- (undocumented)
- Since:
- 2.0.0
- Note:
- this is only supported in Classic.
-
udf
A collection of methods for registering user-defined functions (UDF).
The following example registers a Scala closure as UDF:
sparkSession.udf.register("myUDF", (arg1: Int, arg2: String) => arg2 + arg1)The following example registers a UDF in Java:
sparkSession.udf().register("myUDF", (Integer arg1, String arg2) -> arg2 + arg1, DataTypes.StringType);- Returns:
- (undocumented)
- Since:
- 2.0.0
- Note:
- The user-defined functions must be deterministic. Due to optimization, duplicate invocations may be eliminated or the function may even be invoked more times than it is present in the query.
-
streams
Returns a StreamingQueryManager that allows managing all the StreamingQuerys active on this. See the example below.
- Returns:
- (undocumented)
- Since:
- 2.0.0
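A minimal sketch (not part of the original Javadoc), assuming an existing SparkSession named spark with some streaming queries started:

  // List the currently active streaming queries, then block until any of them terminates.
  spark.streams.active.foreach(q => println(s"${q.name}: ${q.id}"))
  spark.streams.awaitAnyTermination()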
-
newSession
Start a new session in which SQL configurations, temporary tables, and registered functions are isolated, but the underlying SparkContext and cached data are shared. See the example below.
- Returns:
- (undocumented)
- Since:
- 2.0.0
- Note:
- Other than the SparkContext, all shared state is initialized lazily. This method will force the initialization of the shared state to ensure that parent and child sessions are set up with the same shared state. If the underlying catalog implementation is Hive, this will initialize the metastore, which may take some time.
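A minimal sketch (not part of the original Javadoc) of the isolation described above, assuming an existing Classic SparkSession named spark; the config key and view name are illustrative:

  val child = spark.newSession()

  // SQL configurations and temporary views set on the child do not affect the parent...
  child.conf.set("spark.sql.shuffle.partitions", "8")
  child.range(5).createOrReplaceTempView("tiny")

  // ...but both sessions share the same SparkContext (Classic only).
  assert(child.sparkContext eq spark.sparkContext)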
-
emptyDataFrame
public abstract Dataset<Row> emptyDataFrame()
Returns a DataFrame with no rows or columns.
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
emptyDataFrame
Returns a DataFrame with schema schema and no rows.
- Parameters:
- schema - (undocumented)
- Returns:
- (undocumented)
- Since:
- 4.2.0
-
createDataFrame
public abstract <A extends scala.Product> Dataset<Row> createDataFrame(scala.collection.immutable.Seq<A> data, scala.reflect.api.TypeTags.TypeTag<A> evidence$1)
Creates a DataFrame from a local Seq of Product. See the example below.
- Parameters:
- data - (undocumented)
- evidence$1 - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
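A minimal sketch (not part of the original Javadoc), assuming an existing SparkSession named spark; the column names are illustrative:

  // Tuples (and case classes) are Products; the compiler supplies the TypeTag evidence automatically.
  val df = spark.createDataFrame(Seq((1, "alice"), (2, "bob"))).toDF("id", "name")
  df.show()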
-
createDataFrame
:: DeveloperApi :: Creates a DataFrame from a java.util.List containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided List matches the provided schema. Otherwise, there will be a runtime exception.
- Parameters:
- rows - (undocumented)
- schema - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
createDataFrame
public abstract Dataset<Row> createDataFrame(List<?> data, Class<?> beanClass)
Applies a schema to a List of Java Beans.
WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.
- Parameters:
- data - (undocumented)
- beanClass - (undocumented)
- Returns:
- (undocumented)
- Since:
- 1.6.0
-
createDataFrame
public abstract <A extends scala.Product> Dataset<Row> createDataFrame(RDD<A> rdd, scala.reflect.api.TypeTags.TypeTag<A> evidence$2)
Creates a DataFrame from an RDD of Product (e.g. case classes, tuples).
- Parameters:
- rdd - (undocumented)
- evidence$2 - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
- Note:
- this is only supported in Classic.
-
createDataFrame
:: DeveloperApi :: Creates a DataFrame from an RDD containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided RDD matches the provided schema. Otherwise, there will be a runtime exception. Example:

  import org.apache.spark.sql._
  import org.apache.spark.sql.types._
  val sparkSession = new org.apache.spark.sql.SparkSession(sc)

  val schema =
    StructType(
      StructField("name", StringType, false) ::
      StructField("age", IntegerType, true) :: Nil)

  val people =
    sc.textFile("examples/src/main/resources/people.txt").map(
      _.split(",")).map(p => Row(p(0), p(1).trim.toInt))
  val dataFrame = sparkSession.createDataFrame(people, schema)
  dataFrame.printSchema
  // root
  // |-- name: string (nullable = false)
  // |-- age: integer (nullable = true)

  dataFrame.createOrReplaceTempView("people")
  sparkSession.sql("select name from people").collect.foreach(println)

- Parameters:
- rowRDD - (undocumented)
- schema - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
- Note:
- this is only supported in Classic.
-
createDataFrame
:: DeveloperApi :: Creates a DataFrame from a JavaRDD containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided RDD matches the provided schema. Otherwise, there will be a runtime exception.
- Parameters:
- rowRDD - (undocumented)
- schema - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
- Note:
- this is only supported in Classic.
-
createDataFrame
public abstract Dataset<Row> createDataFrame(RDD<?> rdd, Class<?> beanClass)
Applies a schema to an RDD of Java Beans.
WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.
- Parameters:
- rdd - (undocumented)
- beanClass - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
- Note:
- this is only supported in Classic.
-
createDataFrame
Applies a schema to an RDD of Java Beans.
WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.
- Parameters:
- rdd - (undocumented)
- beanClass - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
- Note:
- this is only supported in Classic.
-
baseRelationToDataFrame
Convert a BaseRelation created for external data sources into a DataFrame.
- Parameters:
- baseRelation - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
- Note:
- this is only supported in Classic.
-
emptyDataset
public abstract <T> Dataset<T> emptyDataset(Encoder<T> evidence$3)
Creates a new Dataset of type T containing zero elements. See the example below.
- Parameters:
- evidence$3 - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
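A minimal sketch (not part of the original Javadoc), assuming an existing SparkSession named spark; importing the session's implicits supplies the Encoder evidence:

  import spark.implicits._

  // A zero-row, strongly typed Dataset[String].
  val empty = spark.emptyDataset[String]
  assert(empty.count() == 0)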
-
createDataset
public abstract <T> Dataset<T> createDataset(scala.collection.immutable.Seq<T> data, Encoder<T> evidence$4)
Creates a Dataset from a local Seq of data of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.

==Example==

  import spark.implicits._
  case class Person(name: String, age: Long)

  val data = Seq(Person("Michael", 29), Person("Andy", 30), Person("Justin", 19))
  val ds = spark.createDataset(data)

  ds.show()
  // +-------+---+
  // |   name|age|
  // +-------+---+
  // |Michael| 29|
  // |   Andy| 30|
  // | Justin| 19|
  // +-------+---+

- Parameters:
- data - (undocumented)
- evidence$4 - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
createDataset
public abstract <T> Dataset<T> createDataset(List<T> data, Encoder<T> evidence$5)
Creates a Dataset from a java.util.List of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.

==Java Example==

  List<String> data = Arrays.asList("hello", "world");
  Dataset<String> ds = spark.createDataset(data, Encoders.STRING());

- Parameters:
- data - (undocumented)
- evidence$5 - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
createDataset
public abstract <T> Dataset<T> createDataset(RDD<T> data, Encoder<T> evidence$6)
Creates a Dataset from an RDD of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.
- Parameters:
- data - (undocumented)
- evidence$6 - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
- Note:
- this method is not supported in Spark Connect.
-
range
public abstract Dataset<Long> range(long end)
Creates a Dataset with a single LongType column named id, containing elements in a range from 0 to end (exclusive) with step value 1.
- Parameters:
- end - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
range
public abstract Dataset<Long> range(long start, long end)
Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with step value 1.
- Parameters:
- start - (undocumented)
- end - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
range
public abstract Dataset<Long> range(long start, long end, long step)
Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value.
- Parameters:
- start - (undocumented)
- end - (undocumented)
- step - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
range
public abstract Dataset<Long> range(long start, long end, long step, int numPartitions)
Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value, with the partition number specified. See the example below.
- Parameters:
- start - (undocumented)
- end - (undocumented)
- step - (undocumented)
- numPartitions - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
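A minimal sketch (not part of the original Javadoc) of the range overloads above, assuming an existing SparkSession named spark:

  val a = spark.range(5)             // 0, 1, 2, 3, 4
  val b = spark.range(2, 5)          // 2, 3, 4
  val c = spark.range(0, 10, 2)      // 0, 2, 4, 6, 8
  val d = spark.range(0, 10, 2, 4)   // same values, produced in 4 partitions
  d.show()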
-
catalog
public abstract Catalog catalog()
Interface through which the user may create, drop, alter or query underlying databases, tables, functions etc. See the example below.
- Returns:
- (undocumented)
- Since:
- 2.0.0
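A minimal sketch (not part of the original Javadoc) of a few common Catalog calls, assuming an existing SparkSession named spark; the database name is illustrative:

  // Inspect the catalog/metastore through the Catalog interface.
  spark.catalog.listDatabases().show()
  spark.catalog.listTables("default").show()
  println(spark.catalog.currentDatabase)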
-
table
Returns the specified table/view as a DataFrame. If it's a table, it must support batch reading and the returned DataFrame is the batch scan query plan of this table. If it's a view, the returned DataFrame is simply the query plan of the view, which can either be a batch or streaming query plan. See the example below.
- Parameters:
- tableName - is either a qualified or unqualified name that designates a table or view. If a database is specified, it identifies the table/view from the database. Otherwise, it first attempts to find a temporary view with the given name and then matches the table/view from the current database. Note that the global temporary view database is also valid here.
- Returns:
- (undocumented)
- Since:
- 2.0.0
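A minimal sketch (not part of the original Javadoc), assuming an existing SparkSession named spark; the view name is illustrative:

  // Register a temporary view, then read it back by name.
  spark.range(3).createOrReplaceTempView("numbers")
  val df = spark.table("numbers")
  df.show()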
-
sql
Executes a SQL query, substituting positional parameters with the given arguments, and returns the result as a DataFrame. This API eagerly runs DDL/DML commands, but not SELECT queries. See the example below.
- Parameters:
- sqlText - A SQL statement with positional parameters to execute.
- args - An array of Java/Scala objects that can be converted to SQL literal expressions. See Supported Data Types for supported value types in Scala/Java. For example: 1, "Steven", LocalDate.of(2023, 4, 2). A value can also be a Column of a literal or of collection constructor functions such as map(), array(), struct(), in which case it is taken as is.
- Returns:
- (undocumented)
- Since:
- 3.5.0
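A minimal sketch (not part of the original Javadoc) of positional parameters, assuming an existing SparkSession named spark; the query and values are illustrative:

  // '?' placeholders are bound in order from the args array.
  val df = spark.sql(
    "SELECT * FROM range(10) WHERE id > ? AND id < ?",
    Array(2, 7))
  df.show()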
-
sql
Executes a SQL query, substituting named parameters with the given arguments, and returns the result as a DataFrame. This API eagerly runs DDL/DML commands, but not SELECT queries. See the example below.
- Parameters:
- sqlText - A SQL statement with named parameters to execute.
- args - A map of parameter names to Java/Scala objects that can be converted to SQL literal expressions. See Supported Data Types for supported value types in Scala/Java. For example, map keys: "rank", "name", "birthdate"; map values: 1, "Steven", LocalDate.of(2023, 4, 2). A map value can also be a Column of a literal or of collection constructor functions such as map(), array(), struct(), in which case it is taken as is.
- Returns:
- (undocumented)
- Since:
- 3.4.0
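A minimal sketch (not part of the original Javadoc) of named parameters, assuming an existing SparkSession named spark; the query and values are illustrative:

  // ':name'-style placeholders are bound from the map by parameter name.
  val df = spark.sql(
    "SELECT * FROM range(10) WHERE id BETWEEN :lo AND :hi",
    Map("lo" -> 2, "hi" -> 7))
  df.show()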
-
sql
Executes a SQL query, substituting named parameters with the given arguments, and returns the result as a DataFrame. This API eagerly runs DDL/DML commands, but not SELECT queries.
- Parameters:
- sqlText - A SQL statement with named parameters to execute.
- args - A map of parameter names to Java/Scala objects that can be converted to SQL literal expressions. See Supported Data Types for supported value types in Scala/Java. For example, map keys: "rank", "name", "birthdate"; map values: 1, "Steven", LocalDate.of(2023, 4, 2). A map value can also be a Column of a literal or of collection constructor functions such as map(), array(), struct(), in which case it is taken as is.
- Returns:
- (undocumented)
- Since:
- 3.4.0
-
sql
Executes a SQL query using Spark, returning the result as a DataFrame. This API eagerly runs DDL/DML commands, but not SELECT queries.
- Parameters:
- sqlText - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
executeCommand
Execute an arbitrary string command inside an external execution engine rather than Spark. This can be useful when a user wants to execute commands outside of Spark, for example running a custom DDL/DML command for JDBC, creating an index for Elasticsearch, creating cores for Solr, and so on.
The command will be eagerly executed after this method is called, and the returned DataFrame will contain the output of the command (if any).
- Parameters:
- runner - The class name of the runner that implements ExternalCommandRunner.
- command - The target command to be executed.
- options - The options for the runner.
- Returns:
- (undocumented)
- Since:
- 3.0.0
-
addArtifact
public abstract void addArtifact(String path)
Add a single artifact to the current session.
Currently only local files with extensions .jar and .class are supported.
- Parameters:
- path - (undocumented)
- Since:
- 4.0.0
-
addArtifact
public abstract void addArtifact(URI uri)
Add a single artifact to the current session.
Currently it supports local files with extensions .jar and .class, and Apache Ivy URIs.
- Parameters:
- uri - (undocumented)
- Since:
- 4.0.0
-
addArtifact
public abstract void addArtifact(byte[] bytes, String target)
Add a single in-memory artifact to the session while preserving the directory structure specified by target under the session's working directory of that particular file extension.
Supported target file extensions are .jar and .class.

==Example==

  addArtifact(bytesBar, "foo/bar.class")
  addArtifact(bytesFlat, "flat.class")
  // Directory structure of the session's working directory for class files would look like:
  // ${WORKING_DIR_FOR_CLASS_FILES}/flat.class
  // ${WORKING_DIR_FOR_CLASS_FILES}/foo/bar.class

- Parameters:
- bytes - (undocumented)
- target - (undocumented)
- Since:
- 4.0.0
-
addArtifact
public abstract void addArtifact(String source, String target)
Add a single artifact to the session while preserving the directory structure specified by target under the session's working directory of that particular file extension.
Supported target file extensions are .jar and .class.

==Example==

  addArtifact("/Users/dummyUser/files/foo/bar.class", "foo/bar.class")
  addArtifact("/Users/dummyUser/files/flat.class", "flat.class")
  // Directory structure of the session's working directory for class files would look like:
  // ${WORKING_DIR_FOR_CLASS_FILES}/flat.class
  // ${WORKING_DIR_FOR_CLASS_FILES}/foo/bar.class

- Parameters:
- source - (undocumented)
- target - (undocumented)
- Since:
- 4.0.0
-
addArtifacts
public abstract void addArtifacts(scala.collection.immutable.Seq<URI> uri)
Add one or more artifacts to the session.
Currently it supports local files with extensions .jar and .class, and Apache Ivy URIs.
- Parameters:
- uri - (undocumented)
- Since:
- 4.0.0
-
addTag
public abstract void addTag(String tag)
Add a tag to be assigned to all the operations started by this thread in this session.
Often, a unit of execution in an application consists of multiple Spark executions. Application programmers can use this method to group all those jobs together and give a group tag. The application can use org.apache.spark.sql.SparkSession.interruptTag to cancel all running executions with this tag. For example:

  // In the main thread:
  spark.addTag("myjobs")
  spark.range(10).map(i => { Thread.sleep(10); i }).collect()

  // In a separate thread:
  spark.interruptTag("myjobs")

There may be multiple tags present at the same time, so different parts of the application may use different tags to perform cancellation at different levels of granularity.
- Parameters:
- tag - The tag to be added. Cannot contain ',' (comma) character or be an empty string.
- Since:
- 4.0.0
-
removeTag
public abstract void removeTag(String tag)
Remove a tag previously added to be assigned to all the operations started by this thread in this session. No-op if such a tag was not added earlier.
- Parameters:
- tag - The tag to be removed. Cannot contain ',' (comma) character or be an empty string.
- Since:
- 4.0.0
-
getTags
public abstract scala.collection.immutable.Set<String> getTags()
Get the operation tags that are currently set to be assigned to all the operations started by this thread in this session.
- Returns:
- (undocumented)
- Since:
- 4.0.0
-
clearTags
public abstract void clearTags()
Clear the current thread's operation tags.
- Since:
- 4.0.0
-
interruptAll
public abstract scala.collection.immutable.Seq<String> interruptAll()
Request to interrupt all currently running operations of this session.
- Returns:
- Sequence of operation IDs requested to be interrupted.
- Since:
- 4.0.0
- Note:
- This method will wait up to 60 seconds for the interruption request to be issued.
-
interruptTag
public abstract scala.collection.immutable.Seq<String> interruptTag(String tag)
Request to interrupt all currently running operations of this session with the given job tag.
- Parameters:
- tag - (undocumented)
- Returns:
- Sequence of operation IDs requested to be interrupted.
- Since:
- 4.0.0
- Note:
- This method will wait up to 60 seconds for the interruption request to be issued.
-
interruptOperation
public abstract scala.collection.immutable.Seq<String> interruptOperation(String operationId)
Request to interrupt an operation of this session, given its operation ID.
- Parameters:
- operationId - (undocumented)
- Returns:
- The operation ID requested to be interrupted, as a single-element sequence, or an empty sequence if the operation is not started by this session.
- Since:
- 4.0.0
- Note:
- This method will wait up to 60 seconds for the interruption request to be issued.
-
read
Returns a DataFrameReader that can be used to read non-streaming data in as a DataFrame.

  sparkSession.read.parquet("/path/to/file.parquet")
  sparkSession.read.schema(schema).json("/path/to/file.json")

- Returns:
- (undocumented)
- Since:
- 2.0.0
-
readStream
Returns a DataStreamReader that can be used to read streaming data in as a DataFrame.

  sparkSession.readStream.parquet("/path/to/directory/of/parquet/files")
  sparkSession.readStream.schema(schema).json("/path/to/directory/of/json/files")

- Returns:
- (undocumented)
- Since:
- 2.0.0
-
tvf
- Returns:
- (undocumented)
- Since:
- 4.0.0
-
implicits
(Scala-specific) Implicit methods available in Scala for converting common Scala objects into DataFrames.

  val sparkSession = SparkSession.builder.getOrCreate()
  import sparkSession.implicits._

- Returns:
- (undocumented)
- Since:
- 2.0.0
-
time
public <T> T time(scala.Function0<T> f)
Executes some code block and prints to stdout the time taken to execute the block. This is available in Scala only and is used primarily for interactive testing and debugging. See the example below.
- Parameters:
- f - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.1.0
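A minimal sketch (not part of the original Javadoc), assuming an existing SparkSession named spark:

  // Prints something like "Time taken: 123 ms" to stdout and returns the block's result.
  val total = spark.time {
    spark.range(1000000).selectExpr("sum(id)").first().getLong(0)
  }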
-
stop
public void stop()
- Since:
- 2.0.0
-
withActive
public <T> T withActive(scala.Function0<T> block)
Execute a block of code with this session set as the active session, and restore the previous session on completion. See the example below.
- Parameters:
- block - (undocumented)
- Returns:
- (undocumented)
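A minimal sketch (not part of the original Javadoc), assuming an existing SparkSession named spark; the use of newSession here is illustrative:

  val other = spark.newSession()

  // The block runs with 'other' as this thread's active session; the previously
  // active session is restored on completion.
  val activeInsideBlock = other.withActive { SparkSession.getActiveSession }
  assert(activeInsideBlock.contains(other))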