java.lang.Object
  org.apache.spark.sql.SQLContext

All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging

public abstract class SQLContext extends Object implements org.apache.spark.internal.Logging, Serializable

The entry point for working with structured data (rows and columns) in Spark 1.x.

As of Spark 2.0, this is replaced by SparkSession. However, we are keeping the class here for backward compatibility.
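In new code, create a SparkSession instead; where an older API still expects a SQLContext, one can be obtained from the session. A minimal sketch (the application name is illustrative):

   import org.apache.spark.sql.SparkSession

   val spark = SparkSession.builder()
     .appName("legacy-sqlcontext-app")   // hypothetical application name
     .getOrCreate()

   // The legacy entry point, kept for backward compatibility.
   val sqlContext = spark.sqlContext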

Since:
1.0.0
  • Nested Class Summary

    Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging

    org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter

  • Method Summary

    Dataset<Row> baseRelationToDataFrame(BaseRelation baseRelation)
    Convert a BaseRelation created for external data sources into a DataFrame.

    void cacheTable(String tableName)
    Caches the specified table in-memory.

    static void clearActive()

    void clearCache()
    Removes all cached tables from the in-memory cache.

    Dataset<Row> createDataFrame(java.util.List<?> data, Class<?> beanClass)
    Applies a schema to a List of Java Beans.

    Dataset<Row> createDataFrame(java.util.List<Row> rows, StructType schema)
    :: DeveloperApi :: Creates a DataFrame from a java.util.List containing Rows using the given schema.

    Dataset<Row> createDataFrame(JavaRDD<?> rdd, Class<?> beanClass)
    Applies a schema to an RDD of Java Beans.

    Dataset<Row> createDataFrame(JavaRDD<Row> rowRDD, StructType schema)
    :: DeveloperApi :: Creates a DataFrame from a JavaRDD containing Rows using the given schema.

    Dataset<Row> createDataFrame(RDD<?> rdd, Class<?> beanClass)
    Applies a schema to an RDD of Java Beans.

    <A extends scala.Product> Dataset<Row> createDataFrame(RDD<A> rdd, scala.reflect.api.TypeTags.TypeTag<A> evidence$1)
    Creates a DataFrame from an RDD of Product (e.g. case classes, tuples).

    Dataset<Row> createDataFrame(RDD<Row> rowRDD, StructType schema)
    :: DeveloperApi :: Creates a DataFrame from an RDD containing Rows using the given schema.

    <A extends scala.Product> Dataset<Row> createDataFrame(scala.collection.immutable.Seq<A> data, scala.reflect.api.TypeTags.TypeTag<A> evidence$2)
    Creates a DataFrame from a local Seq of Product.

    <T> Dataset<T> createDataset(java.util.List<T> data, Encoder<T> evidence$5)
    Creates a Dataset from a java.util.List of a given type.

    <T> Dataset<T> createDataset(RDD<T> data, Encoder<T> evidence$4)
    Creates a Dataset from an RDD of a given type.

    <T> Dataset<T> createDataset(scala.collection.immutable.Seq<T> data, Encoder<T> evidence$3)
    Creates a Dataset from a local Seq of data of a given type.

    void dropTempTable(String tableName)
    Drops the temporary table with the given table name in the catalog.

    Dataset<Row> emptyDataFrame()
    Returns a DataFrame with no rows or columns.

    ExperimentalMethods experimental()
    :: Experimental :: A collection of methods that are considered experimental, but can be used to hook into the query planner for advanced functionality.

    scala.collection.immutable.Map<String,String> getAllConfs()
    Return all the configuration properties that have been set (i.e. not the default).

    String getConf(String key)
    Return the value of Spark SQL configuration property for the given key.

    String getConf(String key, String defaultValue)
    Return the value of Spark SQL configuration property for the given key, or defaultValue if the key is not set.

    implicits()
    (Scala-specific) Implicit methods available in Scala for converting common Scala objects into DataFrames.

    boolean isCached(String tableName)
    Returns true if the table is currently cached in-memory.

    Dataset<Row> jdbc(String url, String table, String columnName, long lowerBound, long upperBound, int numPartitions)

    Dataset<Row> jsonFile(String path, double samplingRatio)

    Dataset<Row> jsonRDD(RDD<String> json, double samplingRatio)

    ExecutionListenerManager listenerManager()
    An interface to register custom QueryExecutionListeners that listen for execution metrics.

    SQLContext newSession()
    Returns a SQLContext as a new session, with separated SQL configurations, temporary tables, and registered functions, but sharing the same SparkContext, cached data, and other things.

    Dataset<Row> parquetFile(scala.collection.immutable.Seq<String> paths)

    Dataset<Row> range(long end)
    Creates a DataFrame with a single LongType column named id, containing elements in a range from 0 to end (exclusive) with step value 1.

    Dataset<Row> range(long start, long end)
    Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with step value 1.

    Dataset<Row> range(long start, long end, long step)
    Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value.

    Dataset<Row> range(long start, long end, long step, int numPartitions)
    Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value, with the specified number of partitions.

    DataFrameReader read()
    Returns a DataFrameReader that can be used to read non-streaming data in as a DataFrame.

    DataStreamReader readStream()
    Returns a DataStreamReader that can be used to read streaming data in as a DataFrame.

    static void setActive(SQLContext sqlContext)

    void setConf(String key, String value)
    Set the given Spark SQL configuration property.

    abstract void setConf(Properties props)
    Set Spark SQL configuration properties.

    Dataset<Row> sql(String sqlText)
    Executes a SQL query using Spark, returning the result as a DataFrame.

    StreamingQueryManager streams()
    Returns a StreamingQueryManager that allows managing all the StreamingQueries active on this context.

    Dataset<Row> table(String tableName)
    Returns the specified table as a DataFrame.

    String[] tableNames()
    Returns the names of tables in the current database as an array.

    String[] tableNames(String databaseName)
    Returns the names of tables in the given database as an array.

    Dataset<Row> tables()
    Returns a DataFrame containing names of existing tables in the current database.

    Dataset<Row> tables(String databaseName)
    Returns a DataFrame containing names of existing tables in the given database.

    UDFRegistration udf()
    A collection of methods for registering user-defined functions (UDF).

    void uncacheTable(String tableName)
    Removes the specified table from the in-memory cache.

    Methods inherited from interface org.apache.spark.internal.Logging

    initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logBasedOnLevel, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, MDC, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext

  • Method Details

    • getOrCreate

      public static SQLContext getOrCreate(SparkContext sparkContext)

      Deprecated. As of 2.0.0, replaced by SparkSession.builder().getOrCreate().

    • setActive

      public static void setActive(SQLContext sqlContext)

      Deprecated. As of 2.0.0, replaced by SparkSession.setActiveSession.

    • clearActive

      public static void clearActive()

      Deprecated. As of 2.0.0, replaced by SparkSession.clearActiveSession.

    • sparkSession

      public SparkSession sparkSession()

    • sparkContext

      public SparkContext sparkContext()

    • newSession

      Returns a SQLContext as a new session, with separated SQL configurations, temporary tables, and registered functions, but sharing the same SparkContext, cached data, and other things.
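
      A sketch of the isolation this provides (the configuration values are illustrative):

         val session1 = sqlContext
         val session2 = sqlContext.newSession()

         // Configurations are per-session: setting one here ...
         session1.setConf("spark.sql.shuffle.partitions", "4")
         // ... does not change what the other session sees.
         session2.getConf("spark.sql.shuffle.partitions", "200")   // returns the default, "200"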

      Returns:
      (undocumented)
      Since:
      1.6.0
    • listenerManager

      An interface to register custom QueryExecutionListeners that listen for execution metrics.
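
      For example, a listener can log the name and wall-clock duration of each action that triggers execution. A minimal sketch, assuming Spark 3.x signatures for QueryExecutionListener:

         import org.apache.spark.sql.execution.QueryExecution
         import org.apache.spark.sql.util.QueryExecutionListener

         sqlContext.listenerManager.register(new QueryExecutionListener {
           // Invoked when a query completes successfully.
           override def onSuccess(funcName: String, qe: QueryExecution, durationNs: Long): Unit =
             println(s"$funcName finished in ${durationNs / 1e6} ms")
           // Invoked when a query fails.
           override def onFailure(funcName: String, qe: QueryExecution, exception: Exception): Unit =
             println(s"$funcName failed: ${exception.getMessage}")
         })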

      Returns:
      (undocumented)
    • setConf

      public abstract void setConf(Properties props)

      Set Spark SQL configuration properties.

      Parameters:
      props - (undocumented)
      Since:
      1.0.0
    • setConf

      Set the given Spark SQL configuration property.

      Parameters:
      key - (undocumented)
      value - (undocumented)
      Since:
      1.0.0
    • getConf

      Return the value of Spark SQL configuration property for the given key.

      Parameters:
      key - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.0.0
    • getConf

      Return the value of Spark SQL configuration property for the given key. If the key is not set yet, return defaultValue.

      Parameters:
      key - (undocumented)
      defaultValue - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.0.0
    • getAllConfs

      public scala.collection.immutable.Map<String,String> getAllConfs()

      Return all the configuration properties that have been set (i.e. not the default). This creates a new copy of the config properties in the form of a Map.
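
      A short sketch of getting and setting configuration properties (the keys shown are standard Spark SQL settings):

         sqlContext.setConf("spark.sql.shuffle.partitions", "8")
         sqlContext.getConf("spark.sql.shuffle.partitions")            // "8"

         // Fall back to a supplied default when the key has not been set.
         sqlContext.getConf("spark.sql.autoBroadcastJoinThreshold", "10485760")

         // Snapshot of everything that has been set explicitly.
         sqlContext.getAllConfs.foreach { case (k, v) => println(s"$k = $v") }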

      Returns:
      (undocumented)
      Since:
      1.0.0
    • experimental

      :: Experimental :: A collection of methods that are considered experimental, but can be used to hook into the query planner for advanced functionality.

      Returns:
      (undocumented)
      Since:
      1.3.0
    • emptyDataFrame

      Returns a DataFrame with no rows or columns.

      Returns:
      (undocumented)
      Since:
      1.3.0
    • udf

      A collection of methods for registering user-defined functions (UDF).

      The following example registers a Scala closure as UDF:

      
         sqlContext.udf.register("myUDF", (arg1: Int, arg2: String) => arg2 + arg1)
       

      The following example registers a UDF in Java:

      
         sqlContext.udf().register("myUDF",
             (Integer arg1, String arg2) -> arg2 + arg1,
             DataTypes.StringType);
       
      Returns:
      (undocumented)
      Since:
      1.3.0
      Note:
      The user-defined functions must be deterministic. Due to optimization, duplicate invocations may be eliminated or the function may even be invoked more times than it is present in the query.
    • implicits

      (Scala-specific) Implicit methods available in Scala for converting common Scala objects into DataFrames.

      
         val sqlContext = new SQLContext(sc)
         import sqlContext.implicits._
       
      Returns:
      (undocumented)
      Since:
      1.3.0
    • isCached

      public boolean isCached(String tableName)

      Returns true if the table is currently cached in-memory.

      Parameters:
      tableName - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • cacheTable

      public void cacheTable(String tableName)

      Caches the specified table in-memory.

      Parameters:
      tableName - (undocumented)
      Since:
      1.3.0
    • uncacheTable

      public void uncacheTable(String tableName)

      Removes the specified table from the in-memory cache.

      Parameters:
      tableName - (undocumented)
      Since:
      1.3.0
    • clearCache

      public void clearCache()

      Removes all cached tables from the in-memory cache.
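
      Taken together with isCached, cacheTable, and uncacheTable above, a typical cache lifecycle looks like this sketch (assuming an existing DataFrame df; the view name is illustrative):

         df.createOrReplaceTempView("events")

         sqlContext.cacheTable("events")       // materialized lazily, on first use
         assert(sqlContext.isCached("events"))

         sqlContext.uncacheTable("events")     // evict just this table
         sqlContext.clearCache()               // or evict every cached table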

      Since:
      1.3.0
    • createDataFrame

      public <A extends scala.Product> Dataset<Row> createDataFrame(RDD<A> rdd, scala.reflect.api.TypeTags.TypeTag<A> evidence$1)

      Creates a DataFrame from an RDD of Product (e.g. case classes, tuples).
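
      For example, with a case class the column names and types are inferred from its fields. A minimal sketch, assuming an existing SparkContext sc:

         case class Person(name: String, age: Int)

         val rdd = sc.parallelize(Seq(Person("Alice", 29), Person("Bob", 31)))
         val df = sqlContext.createDataFrame(rdd)
         df.printSchema()
         // root
         //  |-- name: string (nullable = true)
         //  |-- age: integer (nullable = false)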

      Parameters:
      rdd - (undocumented)
      evidence$1 - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • createDataFrame

      public <A extends scala.Product> Dataset<Row> createDataFrame(scala.collection.immutable.Seq<A> data, scala.reflect.api.TypeTags.TypeTag<A> evidence$2)

      Creates a DataFrame from a local Seq of Product.

      Parameters:
      data - (undocumented)
      evidence$2 - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • baseRelationToDataFrame

      Convert a BaseRelation created for external data sources into a DataFrame.

      Parameters:
      baseRelation - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • createDataFrame

      :: DeveloperApi :: Creates a DataFrame from an RDD containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided RDD matches the provided schema. Otherwise, a runtime exception will be thrown. Example:

      
        import org.apache.spark.sql._
        import org.apache.spark.sql.types._
        val sqlContext = new org.apache.spark.sql.SQLContext(sc)
      
        val schema =
          StructType(
            StructField("name", StringType, false) ::
            StructField("age", IntegerType, true) :: Nil)
      
        val people =
          sc.textFile("examples/src/main/resources/people.txt").map(
            _.split(",")).map(p => Row(p(0), p(1).trim.toInt))
        val dataFrame = sqlContext.createDataFrame(people, schema)
        dataFrame.printSchema
        // root
        // |-- name: string (nullable = false)
        // |-- age: integer (nullable = true)
      
        dataFrame.createOrReplaceTempView("people")
        sqlContext.sql("select name from people").collect.foreach(println)
       
      Parameters:
      rowRDD - (undocumented)
      schema - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • createDataset

      public <T> Dataset<T> createDataset(scala.collection.immutable.Seq<T> data, Encoder<T> evidence$3)

      Creates a Dataset from a local Seq of data of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.

      ==Example==

      
      
         import spark.implicits._
         case class Person(name: String, age: Long)
         val data = Seq(Person("Michael", 29), Person("Andy", 30), Person("Justin", 19))
         val ds = spark.createDataset(data)
      
         ds.show()
         // +-------+---+
         // |   name|age|
         // +-------+---+
         // |Michael| 29|
         // |   Andy| 30|
         // | Justin| 19|
         // +-------+---+
       
      Parameters:
      data - (undocumented)
      evidence$3 - (undocumented)
      Returns:
      (undocumented)
      Since:
      2.0.0
    • createDataset

      public <T> Dataset<T> createDataset(RDD<T> data, Encoder<T> evidence$4)

      Creates a Dataset from an RDD of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.
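
      A minimal sketch, mirroring the Seq example above but starting from an RDD (assumes an existing SparkContext sc; the implicits import supplies the Encoder):

         import sqlContext.implicits._

         val rdd = sc.parallelize(Seq(1L, 2L, 3L))
         val ds = sqlContext.createDataset(rdd)   // Dataset[Long]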

      Parameters:
      data - (undocumented)
      evidence$4 - (undocumented)
      Returns:
      (undocumented)
      Since:
      2.0.0
    • createDataset

      Creates a Dataset from a java.util.List of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.

      ==Java Example==

      
           List<String> data = Arrays.asList("hello", "world");
           Dataset<String> ds = spark.createDataset(data, Encoders.STRING());
       
      Parameters:
      data - (undocumented)
      evidence$5 - (undocumented)
      Returns:
      (undocumented)
      Since:
      2.0.0
    • createDataFrame

      :: DeveloperApi :: Creates a DataFrame from a JavaRDD containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided RDD matches the provided schema. Otherwise, a runtime exception will be thrown.

      Parameters:
      rowRDD - (undocumented)
      schema - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • createDataFrame

      :: DeveloperApi :: Creates a DataFrame from a java.util.List containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided List matches the provided schema. Otherwise, a runtime exception will be thrown.

      Parameters:
      rows - (undocumented)
      schema - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.6.0
    • createDataFrame

      Applies a schema to an RDD of Java Beans.

      WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.

      Parameters:
      rdd - (undocumented)
      beanClass - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • createDataFrame

      Applies a schema to an RDD of Java Beans.

      WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.

      Parameters:
      rdd - (undocumented)
      beanClass - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • createDataFrame

      Applies a schema to a List of Java Beans.

      WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.

      Parameters:
      data - (undocumented)
      beanClass - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.6.0
    • read

      Returns a DataFrameReader that can be used to read non-streaming data in as a DataFrame.

      
         sqlContext.read.parquet("/path/to/file.parquet")
         sqlContext.read.schema(schema).json("/path/to/file.json")
       
      Returns:
      (undocumented)
      Since:
      1.4.0
    • readStream

      Returns a DataStreamReader that can be used to read streaming data in as a DataFrame.

      
         sparkSession.readStream.parquet("/path/to/directory/of/parquet/files")
         sparkSession.readStream.schema(schema).json("/path/to/directory/of/json/files")
       
      Returns:
      (undocumented)
      Since:
      2.0.0
    • createExternalTable

      Deprecated. As of 2.2.0, replaced by sparkSession.catalog.createTable (this applies to all createExternalTable variants below).

      Creates an external table from the given path and returns the corresponding DataFrame. It will use the default data source configured by spark.sql.sources.default.

      Parameters:
      tableName - (undocumented)
      path - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • createExternalTable

      Creates an external table from the given path based on a data source and returns the corresponding DataFrame.

      Parameters:
      tableName - (undocumented)
      path - (undocumented)
      source - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • createExternalTable

      Creates an external table from the given path based on a data source and a set of options. Then, returns the corresponding DataFrame.

      Parameters:
      tableName - (undocumented)
      source - (undocumented)
      options - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • createExternalTable

      (Scala-specific) Creates an external table from the given path based on a data source and a set of options. Then, returns the corresponding DataFrame.

      Parameters:
      tableName - (undocumented)
      source - (undocumented)
      options - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • createExternalTable

      Creates an external table from the given path based on a data source, a schema and a set of options. Then, returns the corresponding DataFrame.

      Parameters:
      tableName - (undocumented)
      source - (undocumented)
      schema - (undocumented)
      options - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • createExternalTable

      (Scala-specific) Creates an external table from the given path based on a data source, a schema and a set of options. Then, returns the corresponding DataFrame.

      Parameters:
      tableName - (undocumented)
      source - (undocumented)
      schema - (undocumented)
      options - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • dropTempTable

      public void dropTempTable(String tableName)

      Drops the temporary table with the given table name in the catalog. If the table has been cached/persisted before, it's also unpersisted.
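
      A sketch of the register/drop round trip (assumes an existing DataFrame df; the view name is illustrative):

         df.createOrReplaceTempView("people")
         sqlContext.sql("SELECT count(*) FROM people").show()
         sqlContext.dropTempTable("people")   // unpersists the data too, if it was cached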

      Parameters:
      tableName - the name of the table to be unregistered.
      Since:
      1.3.0
    • range

      Creates a DataFrame with a single LongType column named id, containing elements in a range from 0 to end (exclusive) with step value 1.

      Parameters:
      end - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.4.1
    • range

      public Dataset<Row> range(long start, long end)

      Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with step value 1.

      Parameters:
      start - (undocumented)
      end - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.4.0
    • range

      public Dataset<Row> range(long start, long end, long step)

      Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value.

      Parameters:
      start - (undocumented)
      end - (undocumented)
      step - (undocumented)
      Returns:
      (undocumented)
      Since:
      2.0.0
    • range

      public Dataset<Row> range(long start, long end, long step, int numPartitions)

      Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value, with the specified number of partitions.
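
      For example (step and partition count are illustrative):

         sqlContext.range(0, 10, 2, 4).show()
         // +---+
         // | id|
         // +---+
         // |  0|
         // |  2|
         // |  4|
         // |  6|
         // |  8|
         // +---+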

      Parameters:
      start - (undocumented)
      end - (undocumented)
      step - (undocumented)
      numPartitions - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.4.0
    • sql

      Executes a SQL query using Spark, returning the result as a DataFrame. This API eagerly runs DDL/DML commands, but not SELECT queries.
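
      For example (assumes a temporary view named people has been registered; the query is illustrative):

         val adults = sqlContext.sql("SELECT name, age FROM people WHERE age >= 21")
         adults.show()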

      Parameters:
      sqlText - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • table

      Returns the specified table as a DataFrame.

      Parameters:
      tableName - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • tables

      Returns a DataFrame containing names of existing tables in the current database. The returned DataFrame has three columns, database, tableName and isTemporary (a Boolean indicating if a table is a temporary one or not).

      Returns:
      (undocumented)
      Since:
      1.3.0
    • tables

      Returns a DataFrame containing names of existing tables in the given database. The returned DataFrame has three columns, database, tableName and isTemporary (a Boolean indicating if a table is a temporary one or not).

      Parameters:
      databaseName - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • streams

      Returns a StreamingQueryManager that allows managing all the StreamingQueries active on this context.

      Returns:
      (undocumented)
      Since:
      2.0.0
    • tableNames

      public String[] tableNames()

      Returns the names of tables in the current database as an array.

      Returns:
      (undocumented)
      Since:
      1.3.0
    • tableNames

      Returns the names of tables in the given database as an array.

      Parameters:
      databaseName - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • applySchema

      Deprecated. As of 1.3.0, replaced by createDataFrame.

      Parameters:
      rowRDD - (undocumented)
      schema - (undocumented)
      Returns:
      (undocumented)
    • applySchema

      Deprecated. As of 1.3.0, replaced by createDataFrame.

      Parameters:
      rowRDD - (undocumented)
      schema - (undocumented)
      Returns:
      (undocumented)
    • applySchema

      Deprecated. As of 1.3.0, replaced by createDataFrame.

      Parameters:
      rdd - (undocumented)
      beanClass - (undocumented)
      Returns:
      (undocumented)
    • applySchema

      Deprecated. As of 1.3.0, replaced by createDataFrame.

      Parameters:
      rdd - (undocumented)
      beanClass - (undocumented)
      Returns:
      (undocumented)
    • parquetFile

      public Dataset<Row> parquetFile(scala.collection.immutable.Seq<String> paths)

      Loads a Parquet file, returning the result as a DataFrame. This function returns an empty DataFrame if no paths are passed in.

      Deprecated. As of 1.4.0, replaced by read().parquet().

      Parameters:
      paths - (undocumented)
      Returns:
      (undocumented)
    • jsonFile

      Loads a JSON file (one object per line), returning the result as a DataFrame. It goes through the entire dataset once to determine the schema.

      Deprecated. As of 1.4.0, replaced by read().json() (this applies to all jsonFile variants below).

      Parameters:
      path - (undocumented)
      Returns:
      (undocumented)
    • jsonFile

      Loads a JSON file (one object per line) and applies the given schema, returning the result as a DataFrame.

      Parameters:
      path - (undocumented)
      schema - (undocumented)
      Returns:
      (undocumented)
    • jsonFile

      Parameters:
      path - (undocumented)
      samplingRatio - (undocumented)
      Returns:
      (undocumented)
    • jsonRDD

      Loads an RDD[String] storing JSON objects (one object per record), returning the result as a DataFrame. It goes through the entire dataset once to determine the schema.

      Deprecated. As of 1.4.0, replaced by read().json() (this applies to all jsonRDD variants below).

      Parameters:
      json - (undocumented)
      Returns:
      (undocumented)
    • jsonRDD

      Loads an RDD[String] storing JSON objects (one object per record), returning the result as a DataFrame. It goes through the entire dataset once to determine the schema.

      Parameters:
      json - (undocumented)
      Returns:
      (undocumented)
    • jsonRDD

      Loads an RDD[String] storing JSON objects (one object per record) and applies the given schema, returning the result as a DataFrame.

      Parameters:
      json - (undocumented)
      schema - (undocumented)
      Returns:
      (undocumented)
    • jsonRDD

      Loads a JavaRDD[String] storing JSON objects (one object per record) and applies the given schema, returning the result as a DataFrame.

      Parameters:
      json - (undocumented)
      schema - (undocumented)
      Returns:
      (undocumented)
    • jsonRDD

      Loads an RDD[String] storing JSON objects (one object per record) inferring the schema, returning the result as a DataFrame.

      Parameters:
      json - (undocumented)
      samplingRatio - (undocumented)
      Returns:
      (undocumented)
    • jsonRDD

      Loads a JavaRDD[String] storing JSON objects (one object per record) inferring the schema, returning the result as a DataFrame.

      Parameters:
      json - (undocumented)
      samplingRatio - (undocumented)
      Returns:
      (undocumented)
    • load

      Returns the dataset stored at path as a DataFrame, using the default data source configured by spark.sql.sources.default.

      Deprecated. As of 1.4.0, replaced by read().load(path) (this applies to all load variants below).

      Parameters:
      path - (undocumented)
      Returns:
      (undocumented)
    • load

      Returns the dataset stored at path as a DataFrame, using the given data source.

      Parameters:
      path - (undocumented)
      source - (undocumented)
      Returns:
      (undocumented)
    • load

      (Java-specific) Returns the dataset specified by the given data source and a set of options as a DataFrame.

      Parameters:
      source - (undocumented)
      options - (undocumented)
      Returns:
      (undocumented)
    • load

      (Scala-specific) Returns the dataset specified by the given data source and a set of options as a DataFrame.

      Parameters:
      source - (undocumented)
      options - (undocumented)
      Returns:
      (undocumented)
    • load

      (Java-specific) Returns the dataset specified by the given data source and a set of options as a DataFrame, using the given schema as the schema of the DataFrame.

      Parameters:
      source - (undocumented)
      schema - (undocumented)
      options - (undocumented)
      Returns:
      (undocumented)
    • load

      (Scala-specific) Returns the dataset specified by the given data source and a set of options as a DataFrame, using the given schema as the schema of the DataFrame.

      Parameters:
      source - (undocumented)
      schema - (undocumented)
      options - (undocumented)
      Returns:
      (undocumented)
    • jdbc

      Construct a DataFrame representing the database table accessible via JDBC URL url named table.

      Deprecated. As of 1.4.0, replaced by read().jdbc() (this applies to all jdbc variants below).

      Parameters:
      url - (undocumented)
      table - (undocumented)
      Returns:
      (undocumented)
    • jdbc

      public Dataset<Row> jdbc(String url, String table, String columnName, long lowerBound, long upperBound, int numPartitions)

      Construct a DataFrame representing the database table accessible via JDBC URL url named table. Partitions of the table will be retrieved in parallel based on the parameters passed to this function.
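
      A sketch of a partitioned read (the URL, table, and bounds are illustrative; the appropriate JDBC driver must be on the classpath):

         val orders = sqlContext.jdbc(
           "jdbc:postgresql://localhost/testdb",  // hypothetical JDBC URL
           "orders",                              // hypothetical table name
           "order_id",                            // integral column used for partitioning
           0L,                                    // lowerBound
           1000000L,                              // upperBound
           8)                                     // numPartitions: 8 parallel reads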

      Parameters:
      url - (undocumented)
      table - (undocumented)
      columnName - the name of a column of integral type that will be used for partitioning.
      lowerBound - the minimum value of columnName used to decide partition stride
      upperBound - the maximum value of columnName used to decide partition stride
      numPartitions - the number of partitions. The range from lowerBound to upperBound will be split evenly into this many partitions.
      Returns:
      (undocumented)
    • jdbc

      Construct a DataFrame representing the database table accessible via JDBC URL url named table. The theParts parameter gives a list of expressions suitable for inclusion in WHERE clauses; each one defines one partition of the DataFrame.

      Parameters:
      url - (undocumented)
      table - (undocumented)
      theParts - (undocumented)
      Returns:
      (undocumented)