All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, TargetEncoderBase, Params, HasHandleInvalid, HasInputCol, HasInputCols, HasLabelCol, HasOutputCol, HasOutputCols, DefaultParamsWritable, Identifiable, MLWritable

Target Encoding maps a column of categorical indices into a numerical feature derived from the target.

When handleInvalid is configured to 'keep', previously unseen values of a feature are mapped to the dataset's overall statistics.

When 'targetType' is configured to 'binary', categories are encoded as the conditional probability of the target given that category (bin counting). When 'targetType' is configured to 'continuous', categories are encoded as the average of the target given that category (mean encoding).
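Both modes can be illustrated with the same computation (a plain-Java sketch of the idea, not Spark's implementation): encode each category as the mean of the target over that category's rows. For a 0/1 target this mean is exactly the conditional probability P(target = 1 | category).

```java
import java.util.HashMap;
import java.util.Map;

public class EncodingSketch {
    // Encode each category as the mean of the target over its rows.
    // For a 0/1 target this is the conditional probability P(target=1 | category).
    static Map<String, Double> meanEncode(String[] categories, double[] target) {
        Map<String, double[]> stats = new HashMap<>(); // category -> {sum, count}
        for (int i = 0; i < categories.length; i++) {
            double[] s = stats.computeIfAbsent(categories[i], k -> new double[2]);
            s[0] += target[i];
            s[1] += 1.0;
        }
        Map<String, Double> encoding = new HashMap<>();
        stats.forEach((cat, s) -> encoding.put(cat, s[0] / s[1]));
        return encoding;
    }

    public static void main(String[] args) {
        String[] cats = {"a", "a", "b", "b", "b"};
        double[] y    = {1.0, 0.0, 1.0, 1.0, 0.0};
        Map<String, Double> enc = meanEncode(cats, y);
        System.out.println(enc.get("a")); // 0.5
        System.out.println(enc.get("b")); // ~0.6667
    }
}
```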

Parameter 'smoothing' controls how in-category stats and overall stats are weighted.
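A common blending scheme (shown here as an assumption, not necessarily Spark's exact formula) weights the in-category statistic by the category's row count n and the overall statistic by the smoothing value, so rare categories are pulled toward the overall mean:

```java
public class SmoothingSketch {
    // Blend the in-category statistic with the overall statistic:
    //   (n * categoryMean + smoothing * overallMean) / (n + smoothing)
    // Larger `smoothing` pulls rare categories harder toward the overall mean.
    static double blend(double categoryMean, long n, double overallMean, double smoothing) {
        return (n * categoryMean + smoothing * overallMean) / (n + smoothing);
    }

    public static void main(String[] args) {
        double overall = 0.3;
        // A category seen only twice is pulled strongly toward 0.3 ...
        System.out.println(blend(1.0, 2, overall, 10.0));    // ~0.4167
        // ... while a frequent category keeps most of its own statistic.
        System.out.println(blend(1.0, 1000, overall, 10.0)); // ~0.9931
    }
}
```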

See Also:
  • StringIndexer for converting categorical values into category indices
  • Serialized Form
Note:
When encoding multiple columns using the inputCols and outputCols params, input/output columns come in pairs, specified by their order in the arrays, and each pair is treated independently.
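The pairing rule can be sketched as follows (a plain-Java illustration with hypothetical helper and column names; each (inputCols[i], outputCols[i]) pair is then fitted on its own):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MultiColumnSketch {
    // Pair input and output columns by array position; each pair is independent.
    static Map<String, String> pairColumns(String[] inputCols, String[] outputCols) {
        if (inputCols.length != outputCols.length) {
            throw new IllegalArgumentException(
                "inputCols and outputCols must have the same length");
        }
        Map<String, String> pairs = new LinkedHashMap<>();
        for (int i = 0; i < inputCols.length; i++) {
            // pair i: encode inputCols[i] into outputCols[i]
            pairs.put(inputCols[i], outputCols[i]);
        }
        return pairs;
    }

    public static void main(String[] args) {
        Map<String, String> pairs = pairColumns(
            new String[]{"color", "size"}, new String[]{"color_enc", "size_enc"});
        System.out.println(pairs); // {color=color_enc, size=size_enc}
    }
}
```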
  • Nested Class Summary

    Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging

    org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter

  • Constructor Summary

    Constructors

    TargetEncoder()

    TargetEncoder(String uid)

  • Method Summary

    copy(ParamMap extra)

    Creates a copy of this instance with the same UID and some extra params.

    fit(Dataset<?> dataset)

    Fits a model to the input data.

    handleInvalid()

    Param for how to handle invalid data during transform().

    inputCol()

    Param for input column name.

    inputCols()

    Param for input column names.

    labelCol()

    Param for label column name.

    outputCol()

    Param for output column name.

    outputCols()

    Param for output column names.

    read()

    setSmoothing(double value)

    smoothing()

    transformSchema(StructType schema)

    Check transform validity and derive the output schema from the input schema.

    uid()

    An immutable unique ID for the object and its derivatives.

    Methods inherited from interface org.apache.spark.internal.Logging

    initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logBasedOnLevel, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, MDC, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext

    Methods inherited from interface org.apache.spark.ml.util.MLWritable

    save

  • Constructor Details

    • TargetEncoder

      public TargetEncoder(String uid)

    • TargetEncoder

      public TargetEncoder()

  • Method Details

    • load

    • read

    • handleInvalid

      Param for how to handle invalid data during transform(). Options are 'keep' (invalid data presented as an extra categorical feature) or 'error' (throw an error). Note that this Param is only used during transform; during fitting, invalid data will result in an error. Default: "error"

      Specified by:
      handleInvalid in interface HasHandleInvalid
      Specified by:
      handleInvalid in interface TargetEncoderBase
      Returns:
      (undocumented)
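The 'keep' behavior described above amounts to a lookup with a fallback to the overall statistic for unseen categories (a plain-Java sketch, not Spark's implementation):

```java
import java.util.Map;

public class HandleInvalidSketch {
    // Look up a category's encoding; under 'keep', unseen categories
    // fall back to the overall statistic instead of raising an error.
    static double encode(String category, Map<String, Double> encodings,
                         double overallStat, boolean keep) {
        Double code = encodings.get(category);
        if (code != null) return code;
        if (keep) return overallStat;          // 'keep': map to overall statistic
        throw new IllegalArgumentException(    // 'error': throw on unseen data
            "Unseen category during transform: " + category);
    }

    public static void main(String[] args) {
        Map<String, Double> enc = Map.of("a", 0.5, "b", 0.8);
        System.out.println(encode("a", enc, 0.3, true)); // 0.5
        System.out.println(encode("z", enc, 0.3, true)); // 0.3 (unseen -> overall)
    }
}
```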
    • targetType

      Param for the type of target: options are 'binary' and 'continuous' (see the class description above).

      Specified by:
      targetType in interface TargetEncoderBase
    • smoothing

      Param controlling how in-category statistics and overall statistics are weighted (see the class description above).

      Specified by:
      smoothing in interface TargetEncoderBase
    • outputCols

      Param for output column names.

      Specified by:
      outputCols in interface HasOutputCols
      Returns:
      (undocumented)
    • outputCol

      Param for output column name.

      Specified by:
      outputCol in interface HasOutputCol
      Returns:
      (undocumented)
    • inputCols

      Param for input column names.

      Specified by:
      inputCols in interface HasInputCols
      Returns:
      (undocumented)
    • inputCol

      Description copied from interface: HasInputCol

      Param for input column name.

      Specified by:
      inputCol in interface HasInputCol
      Returns:
      (undocumented)
    • labelCol

      Description copied from interface: HasLabelCol

      Param for label column name.

      Specified by:
      labelCol in interface HasLabelCol
      Returns:
      (undocumented)
    • uid

      An immutable unique ID for the object and its derivatives.

      Specified by:
      uid in interface Identifiable
      Returns:
      (undocumented)
    • setLabelCol

    • setInputCol

    • setOutputCol

    • setInputCols

    • setOutputCols

    • setHandleInvalid

    • setTargetType

    • setSmoothing

    • transformSchema

      Check transform validity and derive the output schema from the input schema.

      We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate().

      A typical implementation should first verify the schema change and parameter validity, including complex parameter-interaction checks.

      Specified by:
      transformSchema in class PipelineStage
      Parameters:
      schema - (undocumented)
      Returns:
      (undocumented)
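A parameter-interaction check of the kind described above might look like this (a hypothetical, simplified sketch; the real transformSchema also derives the output StructType):

```java
public class SchemaCheckSketch {
    // Validate interactions between single- and multi-column params:
    // exactly one of the (inputCol, outputCol) / (inputCols, outputCols)
    // styles must be used, with matching lengths in the multi-column case.
    static void checkParams(String inputCol, String[] inputCols,
                            String outputCol, String[] outputCols) {
        boolean single = inputCol != null;
        boolean multi = inputCols != null;
        if (single == multi) {
            throw new IllegalArgumentException(
                "Set exactly one of inputCol or inputCols");
        }
        if (multi) {
            if (outputCols == null || inputCols.length != outputCols.length) {
                throw new IllegalArgumentException(
                    "inputCols and outputCols must have the same length");
            }
        } else if (outputCol == null) {
            throw new IllegalArgumentException("outputCol must be set with inputCol");
        }
    }

    public static void main(String[] args) {
        checkParams("color", null, "color_enc", null); // valid single-column config
        System.out.println("single-column config ok");
    }
}
```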
    • fit

      Description copied from class: Estimator

      Fits a model to the input data.

      Specified by:
      fit in class Estimator<TargetEncoderModel>
      Parameters:
      dataset - (undocumented)
      Returns:
      (undocumented)
    • copy

      Description copied from interface: Params

      Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly. See defaultCopy().

      Specified by:
      copy in interface Params
      Specified by:
      copy in class Estimator<TargetEncoderModel>
      Parameters:
      extra - (undocumented)
      Returns:
      (undocumented)