functional

Python module

Functional-style wrappers for tensor operations that work in both graph construction and eager execution contexts.

This module provides functional-style tensor operations that work seamlessly with both MAX Graph construction and eager Tensor execution. Each operation wraps the corresponding core graph operation and automatically handles the current execution context.

CustomExtensionType

max.experimental.functional.CustomExtensionType: TypeAlias = str | pathlib.Path

source

Type alias for custom extension paths, matching engine.CustomExtensionsType.

abs()

max.experimental.functional.abs(x)

source

Computes the absolute value element-wise. See max.graph.ops.abs() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

add()

max.experimental.functional.add(lhs, rhs)

source

Adds two tensors element-wise. See max.graph.ops.add() for details.

Parameters:

Return type:

TensorValue

allgather()

max.experimental.functional.allgather(inputs, signal_buffers, axis=0)

source

Concatenate values from multiple devices. See max.graph.ops.allgather() for details.

Parameters:

Return type:

list[TensorValue]

allreduce_sum()

max.experimental.functional.allreduce_sum(inputs, signal_buffers)

source

Sum values from multiple devices. See max.graph.ops.allreduce.sum() for details.

Parameters:

Return type:

list[TensorValue]

arange()

max.experimental.functional.arange(start, stop, step=1, out_dim=None, *, dtype, device)

source

Creates a tensor with evenly spaced values. See max.graph.ops.range() for details.

Parameters:

Return type:

TensorValue
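
The values run start, start + step, start + 2*step, and so on, stopping before stop. A plain-Python sketch of the numeric semantics (illustration only; the real op returns a TensorValue with the requested dtype and device):

```python
def arange(start, stop, step=1):
    # Evenly spaced values in [start, stop), like Python's range,
    # but supporting the same forward/backward stepping.
    out, v = [], start
    while (step > 0 and v < stop) or (step < 0 and v > stop):
        out.append(v)
        v += step
    return out

print(arange(0, 10, 2))  # [0, 2, 4, 6, 8]
```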

argmax()

max.experimental.functional.argmax(x, axis=-1)

source

Returns the indices of the maximum values along an axis.

Parameters:

Returns:

A tensor containing the indices of the maximum values.

Return type:

TensorValue

argmin()

max.experimental.functional.argmin(x, axis=-1)

source

Returns the indices of the minimum values along an axis.

Parameters:

Returns:

A tensor containing the indices of the minimum values.

Return type:

TensorValue

argsort()

max.experimental.functional.argsort(x, ascending=True)

source

Returns the indices that would sort a tensor along an axis. See max.graph.ops.argsort() for details.

Parameters:

Return type:

TensorValue
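
The result is a permutation of indices such that indexing by it yields a sorted tensor. A plain-Python sketch of the semantics (the real op works along an axis of a TensorValue):

```python
def argsort(xs, ascending=True):
    # Indices that would sort xs: xs[i] visited in this order is sorted.
    return sorted(range(len(xs)), key=lambda i: xs[i], reverse=not ascending)

xs = [3.0, 1.0, 2.0]
idx = argsort(xs)  # [1, 2, 0]
assert [xs[i] for i in idx] == sorted(xs)
```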

as_interleaved_complex()

max.experimental.functional.as_interleaved_complex(x)

source

Converts a tensor to interleaved complex representation. See max.graph.ops.as_interleaved_complex() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

atanh()

max.experimental.functional.atanh(x)

source

Computes the inverse hyperbolic tangent element-wise. See max.graph.ops.atanh() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

avg_pool2d()

max.experimental.functional.avg_pool2d(input, kernel_size, stride=1, dilation=1, padding=0, ceil_mode=False, count_boundary=True)

source

Applies 2D average pooling. See max.graph.ops.avg_pool2d() for details.

Parameters:

Return type:

TensorValue

band_part()

max.experimental.functional.band_part(x, num_lower=None, num_upper=None, exclude=False)

source

Copies a tensor setting everything outside a central band to zero. See max.graph.ops.band_part() for details.

Parameters:

Return type:

TensorValue
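
The band is the set of entries (i, j) with i - num_lower <= j <= i + num_upper, where None means "unbounded" on that side. A plain-Python sketch of the usual band_part convention (the exclude flag is assumed to invert the band, matching the MAX parameter):

```python
def band_part(m, num_lower=None, num_upper=None, exclude=False):
    # Keep entries inside the band; with exclude=True, zero the band instead.
    n_rows, n_cols = len(m), len(m[0])
    out = [[0] * n_cols for _ in range(n_rows)]
    for i in range(n_rows):
        for j in range(n_cols):
            in_band = ((num_lower is None or i - j <= num_lower)
                       and (num_upper is None or j - i <= num_upper))
            if in_band != exclude:
                out[i][j] = m[i][j]
    return out

m = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(band_part(m, 0, 0))  # main diagonal only: [[1, 0, 0], [0, 5, 0], [0, 0, 9]]
```

For example, `band_part(m, None, 0)` keeps the lower triangle including the diagonal.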

broadcast_to()

max.experimental.functional.broadcast_to(x, shape, out_dims=None)

source

Broadcasts a tensor to a new shape. See max.graph.ops.broadcast_to() for details.

Parameters:

Return type:

TensorValue
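
Broadcasting aligns shapes from the right: each source dimension must be 1 or equal to the target dimension. A plain-Python sketch of the shape rule (the real op also materializes the expanded values):

```python
def broadcast_shape(src, target):
    # Validate that `src` broadcasts to `target` under standard
    # right-aligned broadcasting rules, then return the target shape.
    if len(src) > len(target):
        raise ValueError("source has more dims than target")
    padded = (1,) * (len(target) - len(src)) + tuple(src)
    for s, t in zip(padded, target):
        if s != 1 and s != t:
            raise ValueError(f"cannot broadcast {src} to {target}")
    return tuple(target)

print(broadcast_shape((3, 1), (2, 3, 4)))  # (2, 3, 4)
```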

buffer_store()

max.experimental.functional.buffer_store(destination, source)

source

Sets a tensor buffer to new values. See max.graph.ops.buffer_store() for details.

Parameters:

Return type:

None

buffer_store_slice()

max.experimental.functional.buffer_store_slice(destination, source, indices)

source

Sets a slice of a tensor buffer to new values. See max.graph.ops.buffer_store_slice() for details.

Parameters:

Return type:

None

cast()

max.experimental.functional.cast(x, dtype)

source

Casts a tensor to a different data type. See max.graph.ops.cast() for details.

Parameters:

Return type:

TensorValue

chunk()

max.experimental.functional.chunk(x, chunks, axis=0)

source

Splits a tensor into chunks along a dimension. See max.graph.ops.chunk() for details.

Parameters:

Return type:

list[TensorValue]
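
A plain-Python sketch of torch-style chunking semantics (an assumption here: each chunk has size ceil(n / chunks) and the last chunk may be smaller when the dimension does not divide evenly):

```python
import math

def chunk(xs, chunks):
    # Split xs into up to `chunks` pieces of size ceil(len(xs) / chunks).
    size = math.ceil(len(xs) / chunks)
    return [xs[i:i + size] for i in range(0, len(xs), size)]

print(chunk([1, 2, 3, 4, 5, 6], 3))  # [[1, 2], [3, 4], [5, 6]]
```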

complex_mul()

max.experimental.functional.complex_mul(lhs, rhs)

source

Multiply two complex-valued tensors. See max.graph.ops.complex.mul() for details.

Parameters:

Return type:

TensorValue

concat()

max.experimental.functional.concat(original_vals, axis=0)

source

Concatenates a list of tensors along an axis. See max.graph.ops.concat() for details.

Parameters:

Return type:

TensorValue

constant()

max.experimental.functional.constant(value, dtype=None, device=None)

source

Creates a constant tensor. See max.graph.ops.constant() for details.

Parameters:

Return type:

TensorValue

constant_external()

max.experimental.functional.constant_external(name, type)

source

Creates a constant tensor from external data. See max.graph.ops.constant_external() for details.

Parameters:

Return type:

TensorValue

conv2d()

max.experimental.functional.conv2d(x, filter, stride=(1, 1), dilation=(1, 1), padding=(0, 0, 0, 0), groups=1, bias=None, input_layout=ConvInputLayout.NHWC, filter_layout=FilterLayout.RSCF)

source

Applies 2D convolution. See max.graph.ops.conv2d() for details.

Parameters:

Return type:

TensorValue

conv2d_transpose()

max.experimental.functional.conv2d_transpose(x, filter, stride=(1, 1), dilation=(1, 1), padding=(0, 0, 0, 0), output_paddings=(0, 0), bias=None, input_layout=ConvInputLayout.NHWC, filter_layout=FilterLayout.RSCF)

source

Applies 2D transposed convolution. See max.graph.ops.conv2d_transpose() for details.

Parameters:

Return type:

TensorValue

conv3d()

max.experimental.functional.conv3d(x, filter, stride=(1, 1, 1), dilation=(1, 1, 1), padding=(0, 0, 0, 0, 0, 0), groups=1, bias=None, input_layout=ConvInputLayout.NHWC, filter_layout=FilterLayout.QRSCF)

source

Applies 3D convolution. See max.graph.ops.conv3d() for details.

Parameters:

Return type:

TensorValue

cos()

max.experimental.functional.cos(x)

source

Computes the cosine element-wise. See max.graph.ops.cos() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

cumsum()

max.experimental.functional.cumsum(x, axis=-1, exclusive=False, reverse=False)

source

Computes the cumulative sum along an axis. See max.graph.ops.cumsum() for details.

Parameters:

Return type:

TensorValue
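
The exclusive and reverse flags follow the usual cumsum conventions: exclusive shifts the sum so element i excludes x[i], and reverse accumulates from the end. A plain-Python sketch of the semantics along one axis:

```python
def cumsum(xs, exclusive=False, reverse=False):
    # Running sum with optional exclusive shift and reversed direction.
    seq = list(reversed(xs)) if reverse else list(xs)
    out, total = [], 0
    for v in seq:
        if exclusive:
            out.append(total)
            total += v
        else:
            total += v
            out.append(total)
    return list(reversed(out)) if reverse else out

print(cumsum([1, 2, 3]))                  # [1, 3, 6]
print(cumsum([1, 2, 3], exclusive=True))  # [0, 1, 3]
print(cumsum([1, 2, 3], reverse=True))    # [6, 5, 3]
```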

custom()

max.experimental.functional.custom(name, device, values, out_types, parameters=None, custom_extensions=None)

source

Applies a custom operation with optional custom extension loading.

Creates a node to execute a custom graph operation. The custom op should be registered by annotating a Mojo function with the @compiler.register decorator.

This function extends max.graph.ops.custom() with automatic loading of custom extension libraries, eliminating the need to manually import kernels before use.

Example:

from max.experimental import functional as F
from max.experimental.tensor import Tensor
from max.dtype import DType
from max.driver import CPU

x = Tensor.full([10], 10, dtype=DType.float32, device=CPU())
y = Tensor.ones([10], dtype=DType.float32, device=CPU())

result = F.custom(
    "vector_sum",
    device=x.device,
    values=[x, y],
    out_types=[x.type],
    custom_extensions="ops.mojopkg"
)[0]

Parameters:

  • name (str) – The op name provided to @compiler.register.
  • device (driver.Device | DeviceRef) – Device that the op is assigned to. This becomes a target parameter to the kernel.
  • values (Sequence[Value[Any]]) – The op function’s arguments.
  • out_types (Sequence[Type[Any]]) – The list of op function’s return types.
  • parameters (Mapping[str, bool | int | str | DType] | None) – Dictionary of extra parameters expected by the kernel.
  • custom_extensions (CustomExtensionsType | None) – Paths to custom extension libraries (.mojopkg files or Mojo source directories). Extensions are automatically loaded into the current graph if not already present.

Returns:

Symbolic values representing the outputs of the op in the graph. These correspond 1:1 with the types passed as out_types.

Return type:

list[Value[Any]]

div()

max.experimental.functional.div(lhs, rhs)

source

Divides two tensors element-wise. See max.graph.ops.div() for details.

Parameters:

Return type:

TensorValue

equal()

max.experimental.functional.equal(lhs, rhs)

source

Computes element-wise equality comparison. See max.graph.ops.equal() for details.

Parameters:

Return type:

TensorValue

erf()

max.experimental.functional.erf(x)

source

Computes the error function element-wise. See max.graph.ops.erf() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

exp()

max.experimental.functional.exp(x)

source

Computes the exponential element-wise. See max.graph.ops.exp() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

flatten()

max.experimental.functional.flatten(x, start_dim=0, end_dim=-1)

source

Flattens a tensor. See max.graph.ops.flatten() for details.

Parameters:

Return type:

TensorValue

floor()

max.experimental.functional.floor(x)

source

Computes the floor element-wise. See max.graph.ops.floor() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

fold()

max.experimental.functional.fold(input, output_size, kernel_size, stride=1, dilation=1, padding=0)

source

Performs tensor folding operation. See max.graph.ops.fold() for details.

Parameters:

Return type:

TensorValue

functional()

max.experimental.functional.functional(op)

source

Converts a graph operation into one that supports multiple tensor types.

Returns a wrapped op that can be called with eager Tensors as well as symbolic TensorValues in the current execution context.

Parameters:

op (Callable[[...], Any])

Return type:

Callable[[…], Any]

gather()

max.experimental.functional.gather(input, indices, axis)

source

Gathers values along an axis specified by indices. See max.graph.ops.gather() for details.

Parameters:

Return type:

TensorValue
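
Conceptually, gather selects entries of the input at the given index positions along one axis. A plain-Python sketch of the axis-0 case (the real op generalizes to any axis and multi-dimensional inputs):

```python
def gather(xs, indices):
    # Select xs at each index, in order; indices may repeat.
    return [xs[i] for i in indices]

print(gather([10, 20, 30, 40], [3, 0, 0]))  # [40, 10, 10]
```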

gather_nd()

max.experimental.functional.gather_nd(input, indices, batch_dims=0)

source

Gathers values using multi-dimensional indices. See max.graph.ops.gather_nd() for details.

Parameters:

Return type:

TensorValue

gelu()

max.experimental.functional.gelu(x, approximate='none')

source

Applies the Gaussian Error Linear Unit (GELU) activation. See max.graph.ops.gelu() for details.

Parameters:

Return type:

TensorValue

greater()

max.experimental.functional.greater(lhs, rhs)

source

Computes element-wise greater-than comparison. See max.graph.ops.greater() for details.

Parameters:

Return type:

TensorValue

greater_equal()

max.experimental.functional.greater_equal(lhs, rhs)

source

Computes element-wise greater-than-or-equal comparison. See max.graph.ops.greater_equal() for details.

Parameters:

Return type:

TensorValue

hann_window()

max.experimental.functional.hann_window(window_length, device, periodic=True, dtype=DType.float32)

source

Creates a Hann window. See max.graph.ops.hann_window() for details.

Parameters:

Return type:

TensorValue
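
The Hann window is 0.5 * (1 - cos(2*pi*k / denom)). A plain-Python sketch, assuming the periodic flag follows the torch/scipy convention (periodic divides by N for FFT use; symmetric divides by N - 1):

```python
import math

def hann_window(n, periodic=True):
    # Hann window of length n; a length-1 window is defined as [1.0].
    if n == 1:
        return [1.0]
    denom = n if periodic else n - 1
    return [0.5 * (1 - math.cos(2 * math.pi * k / denom)) for k in range(n)]

w = hann_window(4, periodic=False)
print([round(v, 3) for v in w])  # [0.0, 0.75, 0.75, 0.0]
```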

in_graph_context()

max.experimental.functional.in_graph_context()

source

Checks whether the caller is inside a Graph context.

Returns:

True if inside a with Graph(...): block, False otherwise.

Return type:

bool

inplace_custom()

max.experimental.functional.inplace_custom(name, device, values, out_types=None, parameters=None, custom_extensions=None)

source

Applies an in-place custom operation with optional custom extension loading.

Creates a node to execute an in-place custom graph operation. The custom op should be registered by annotating a Mojo function with the @compiler.register decorator.

This function extends max.graph.ops.inplace_custom() with automatic loading of custom extension libraries, eliminating the need to manually import kernels before use.

Example:

from max.experimental import functional as F
from max.experimental.tensor import Tensor
from max.dtype import DType
from max.driver import CPU

# Create a buffer for in-place modification
data = Tensor.zeros([10], dtype=DType.float32, device=CPU())

# Use in-place custom op with inline extension loading
F.inplace_custom(
    "my_inplace_op",
    device=data.device,
    values=[data],
    custom_extensions="ops.mojopkg"
)

Parameters:

  • name (str) – The op name provided to @compiler.register.
  • device (driver.Device | DeviceRef) – Device that the op is assigned to. This becomes a target parameter to the kernel.
  • values (Sequence[Value[Any]]) – The op function’s arguments. At least one must be a BufferValue or _OpaqueValue.
  • out_types (Sequence[Type[Any]] | None) – The list of op function’s return types. Can be None if the operation has no outputs.
  • parameters (dict[str, bool | int | str | DType] | None) – Dictionary of extra parameters expected by the kernel.
  • custom_extensions (CustomExtensionsType | None) – Paths to custom extension libraries (.mojopkg files or Mojo source directories). Extensions are automatically loaded into the current graph if not already present.

Returns:

Symbolic values representing the outputs of the op in the graph.

Return type:

list[Value[Any]]

irfft()

max.experimental.functional.irfft(input_tensor, n=None, axis=-1, normalization=Normalization.BACKWARD, input_is_complex=False, buffer_size_mb=512)

source

Computes the inverse real FFT. See max.graph.ops.irfft() for details.

Parameters:

  • input_tensor (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue)
  • n (int | None)
  • axis (int)
  • normalization (Normalization | str)
  • input_is_complex (bool)
  • buffer_size_mb (int)

Return type:

TensorValue

is_inf()

max.experimental.functional.is_inf(x)

source

Checks for infinite values element-wise. See max.graph.ops.is_inf() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

is_nan()

max.experimental.functional.is_nan(x)

source

Checks for NaN values element-wise. See max.graph.ops.is_nan() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

layer_norm()

max.experimental.functional.layer_norm(input, gamma, beta, epsilon)

source

Applies layer normalization. See max.graph.ops.layer_norm() for details.

Parameters:

Return type:

TensorValue
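
Layer normalization normalizes to zero mean and unit variance over the feature axis, then applies a learned scale (gamma) and shift (beta). A plain-Python sketch of the formula for a single feature vector:

```python
import math

def layer_norm(xs, gamma, beta, epsilon=1e-5):
    # (x - mean) / sqrt(var + eps) * gamma + beta, per feature.
    mean = sum(xs) / len(xs)
    var = sum((v - mean) ** 2 for v in xs) / len(xs)
    inv = 1.0 / math.sqrt(var + epsilon)
    return [(v - mean) * inv * g + b for v, g, b in zip(xs, gamma, beta)]

out = layer_norm([1.0, 2.0, 3.0], gamma=[1.0] * 3, beta=[0.0] * 3)
print([round(v, 3) for v in out])  # approx. [-1.225, 0.0, 1.225]
```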

lazy()

max.experimental.functional.lazy()

source

Context manager for lazy tensor evaluation.

Within this context, tensor operations are recorded but not executed. Tensors remain unrealized until explicitly awaited via await tensor.realize or until their values are needed (for example, by calling .item()).

This is particularly useful for creating tensors that may never be used: lazy tensors that are never used never allocate memory or perform operations.

Yields:

None

Return type:

Generator[None]

from max.experimental import functional as F
from max.experimental.tensor import Tensor
from max.experimental.nn import Linear

with F.lazy():
    model = Linear(2, 3)

print(model)  # Lazy weights are not yet initialized

# Executing the model would be fine; the weights would be created
# on first use:
# output = model(Tensor.ones([5, 2]))

# Load pretrained weights without ever creating the original random weights
weights = {
    "weight": Tensor.zeros([3, 2]),
    "bias": Tensor.zeros([3]),
}
model.load_state_dict(weights)

log()

max.experimental.functional.log(x)

source

Computes the natural logarithm element-wise. See max.graph.ops.log() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

log1p()

max.experimental.functional.log1p(x)

source

Computes log(1 + x) element-wise. See max.graph.ops.log1p() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

logical_and()

max.experimental.functional.logical_and(lhs, rhs)

source

Computes element-wise logical AND. See max.graph.ops.logical_and() for details.

Parameters:

Return type:

TensorValue

logical_not()

max.experimental.functional.logical_not(x)

source

Computes element-wise logical NOT. See max.graph.ops.logical_not() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

logical_or()

max.experimental.functional.logical_or(lhs, rhs)

source

Computes element-wise logical OR. See max.graph.ops.logical_or() for details.

Parameters:

Return type:

TensorValue

logical_xor()

max.experimental.functional.logical_xor(lhs, rhs)

source

Computes element-wise logical XOR. See max.graph.ops.logical_xor() for details.

Parameters:

Return type:

TensorValue

logsoftmax()

max.experimental.functional.logsoftmax(value, axis=-1)

source

Applies the log softmax function. See max.graph.ops.logsoftmax() for details.

Parameters:

Return type:

TensorValue

masked_scatter()

max.experimental.functional.masked_scatter(input, mask, updates, out_dim)

source

Scatters values according to a mask. See max.graph.ops.masked_scatter() for details.

Parameters:

Return type:

TensorValue

matmul()

max.experimental.functional.matmul(lhs, rhs)

source

Performs matrix multiplication. See max.graph.ops.matmul() for details.

Parameters:

Return type:

TensorValue

max()

max.experimental.functional.max(x, y=None, /, axis=-1)

source

Returns the maximum values along an axis, or elementwise maximum of two tensors.

Parameters:

Returns:

A tensor containing the maximum values.

Return type:

TensorValue
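
With one argument, max reduces along an axis; with two, it computes the elementwise maximum. A plain-Python sketch of both call patterns (the real op broadcasts in the two-argument form):

```python
def tensor_max(x, y=None):
    # One argument: reduce to the maximum value.
    # Two arguments: elementwise maximum.
    if y is None:
        return max(x)
    return [max(a, b) for a, b in zip(x, y)]

print(tensor_max([3, 1, 4]))       # 4
print(tensor_max([1, 5], [2, 3]))  # [2, 5]
```

min() behaves symmetrically.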

max_pool2d()

max.experimental.functional.max_pool2d(input, kernel_size, stride=1, dilation=1, padding=0, ceil_mode=False)

source

Applies 2D max pooling. See max.graph.ops.max_pool2d() for details.

Parameters:

Return type:

TensorValue

mean()

max.experimental.functional.mean(x, axis=-1)

source

Computes the mean along specified axes.

Parameters:

Returns:

A tensor containing the mean values.

Return type:

TensorValue

min()

max.experimental.functional.min(x, y=None, /, axis=-1)

source

Returns the minimum values along an axis, or elementwise minimum of two tensors.

Parameters:

Returns:

A tensor containing the minimum values.

Return type:

TensorValue

mod()

max.experimental.functional.mod(lhs, rhs)

source

Computes the modulo operation element-wise. See max.graph.ops.mod() for details.

Parameters:

Return type:

TensorValue

mul()

max.experimental.functional.mul(lhs, rhs)

source

Multiplies two tensors element-wise. See max.graph.ops.mul() for details.

Parameters:

Return type:

TensorValue

negate()

max.experimental.functional.negate(x)

source

Negates a tensor element-wise. See max.graph.ops.negate() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

nonzero()

max.experimental.functional.nonzero(x, out_dim)

source

Returns the indices of non-zero elements. See max.graph.ops.nonzero() for details.

Parameters:

Return type:

TensorValue

not_equal()

max.experimental.functional.not_equal(lhs, rhs)

source

Computes element-wise inequality comparison. See max.graph.ops.not_equal() for details.

Parameters:

Return type:

TensorValue

outer()

max.experimental.functional.outer(lhs, rhs)

source

Computes the outer product of two vectors. See max.graph.ops.outer() for details.

Parameters:

Return type:

TensorValue
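
The outer product of vectors u and v is the matrix with entries u[i] * v[j]. A plain-Python sketch:

```python
def outer(u, v):
    # result[i][j] = u[i] * v[j]
    return [[a * b for b in v] for a in u]

print(outer([1, 2], [3, 4, 5]))  # [[3, 4, 5], [6, 8, 10]]
```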

pad()

max.experimental.functional.pad(input, paddings, mode='constant', value=0)

source

Pads a tensor. See max.graph.ops.pad() for details.

Parameters:

Return type:

TensorValue

permute()

max.experimental.functional.permute(x, dims)

source

Permutes the dimensions of a tensor. See max.graph.ops.permute() for details.

Parameters:

Return type:

TensorValue

pow()

max.experimental.functional.pow(lhs, rhs)

source

Raises tensor elements to a power. See max.graph.ops.pow() for details.

Parameters:

Return type:

TensorValue

prod()

max.experimental.functional.prod(x, axis=-1)

source

Computes the product along specified axes.

Parameters:

Returns:

A tensor containing the product values.

Return type:

TensorValue

rebind()

max.experimental.functional.rebind(x, shape, message='', layout=None)

source

Rebinds the shape of a tensor, asserting dimension consistency at runtime. See max.graph.ops.rebind() for details.

Parameters:

Return type:

TensorValue

relu()

max.experimental.functional.relu(x)

source

Applies the ReLU activation function. See max.graph.ops.relu() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

repeat_interleave()

max.experimental.functional.repeat_interleave(x, repeats, axis=None, out_dim=None)

source

Repeats elements of a tensor. See max.graph.ops.repeat_interleave() for details.

Parameters:

Return type:

TensorValue

reshape()

max.experimental.functional.reshape(x, shape)

source

Reshapes a tensor to a new shape. See max.graph.ops.reshape() for details.

Parameters:

Return type:

TensorValue

round()

max.experimental.functional.round(x)

source

Rounds tensor values element-wise. See max.graph.ops.round() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

rsqrt()

max.experimental.functional.rsqrt(x)

source

Computes the reciprocal square root element-wise. See max.graph.ops.rsqrt() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

scatter()

max.experimental.functional.scatter(input, updates, indices, axis=-1)

source

Scatters values along an axis. See max.graph.ops.scatter() for details.

Parameters:

Return type:

TensorValue
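
Scatter writes each update into the input at the corresponding index along the axis, leaving other entries unchanged. A plain-Python sketch of the 1-D, axis-0 case:

```python
def scatter(xs, updates, indices):
    # Copy xs, then write updates[k] at position indices[k].
    out = list(xs)
    for idx, upd in zip(indices, updates):
        out[idx] = upd
    return out

print(scatter([0, 0, 0, 0], [9, 8], [1, 3]))  # [0, 9, 0, 8]
```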

scatter_nd()

max.experimental.functional.scatter_nd(input, updates, indices)

source

Scatters values using multi-dimensional indices. See max.graph.ops.scatter_nd() for details.

Parameters:

Return type:

TensorValue

sigmoid()

max.experimental.functional.sigmoid(x)

source

Applies the sigmoid activation function. See max.graph.ops.sigmoid() for details.

Parameters:

x (TensorValue)

Return type:

TensorValue

silu()

max.experimental.functional.silu(x)

source

Applies the SiLU (Swish) activation function. See max.graph.ops.silu() for details.

Parameters:

x (TensorValue)

Return type:

TensorValue

x (TensorValue)

sin()

max.experimental.functional.sin(x)

source

Computes the sine element-wise. See max.graph.ops.sin() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

slice_tensor()

max.experimental.functional.slice_tensor(x, indices)

source

Slices a tensor along specified dimensions. See max.graph.ops.slice_tensor() for details.

Parameters:

Return type:

TensorValue

softmax()

max.experimental.functional.softmax(value, axis=-1)

source

Applies the softmax function. See max.graph.ops.softmax() for details.

Parameters:

Return type:

TensorValue
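
Softmax exponentiates and normalizes so the outputs sum to 1. A plain-Python sketch of the numerically stable form (shifting by the maximum before exponentiating, a standard trick to avoid overflow):

```python
import math

def softmax(xs):
    # exp(x - max(x)) / sum(exp(x - max(x))): same result, no overflow.
    m = max(xs)
    exps = [math.exp(v - m) for v in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
print(round(sum(probs), 6))  # 1.0
```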

split()

max.experimental.functional.split(x, split_size_or_sections, axis=0)

source

Splits a tensor into multiple tensors along a given dimension.

This function supports two modes, matching PyTorch’s behavior:

  • If split_size_or_sections is an int, splits into chunks of that size (the last chunk may be smaller if the dimension is not evenly divisible).
  • If split_size_or_sections is a list of ints, splits into chunks with exactly those sizes (must sum to the dimension size).

from max.experimental import functional as F
from max.experimental.tensor import Tensor

x = Tensor.ones([10, 4])

# Split into chunks of size 3 (last chunk is size 1)
chunks = F.split(x, 3, axis=0)  # shapes: [3,4], [3,4], [3,4], [1,4]

# Split into exact sizes
chunks = F.split(x, [2, 3, 5], axis=0)  # shapes: [2,4], [3,4], [5,4]

Parameters:

  • x (Tensor | TensorValue) – The input tensor to split.
  • split_size_or_sections (int | list[int]) – Either an int (chunk size) or a list of ints (exact sizes for each output tensor).
  • axis (int) – The dimension along which to split. Defaults to 0.

Returns:

A list of tensors resulting from the split.

Return type:

list[Tensor] | list[TensorValue]

sqrt()

max.experimental.functional.sqrt(x)

source

Computes the square root element-wise. See max.graph.ops.sqrt() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

squeeze()

max.experimental.functional.squeeze(x, axis)

source

Removes dimensions of size 1. See max.graph.ops.squeeze() for details.

Parameters:

Return type:

TensorValue

stack()

max.experimental.functional.stack(values, axis=0)

source

Stacks tensors along a new dimension. See max.graph.ops.stack() for details.

Parameters:

Return type:

TensorValue

sub()

max.experimental.functional.sub(lhs, rhs)

source

Subtracts two tensors element-wise. See max.graph.ops.sub() for details.

Parameters:

Return type:

TensorValue

sum()

max.experimental.functional.sum(x, axis=-1)

source

Computes the sum along specified axes.

Parameters:

Returns:

A tensor containing the sum values.

Return type:

TensorValue

tanh()

max.experimental.functional.tanh(x)

source

Computes the hyperbolic tangent element-wise. See max.graph.ops.tanh() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

tile()

max.experimental.functional.tile(x, repeats)

source

Tiles a tensor by repeating it. See max.graph.ops.tile() for details.

Parameters:

Return type:

TensorValue

top_k()

max.experimental.functional.top_k(input, k, axis=-1)

source

Returns the k largest elements along an axis. See max.graph.ops.top_k() for details.

Parameters:

Return type:

tuple[TensorValue, TensorValue]
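
The result is a pair: the k largest values (largest first) and their indices in the input. A plain-Python sketch of the semantics along one axis:

```python
def top_k(xs, k):
    # Indices of the k largest values, largest first, plus the values.
    order = sorted(range(len(xs)), key=lambda i: xs[i], reverse=True)[:k]
    return [xs[i] for i in order], order

values, indices = top_k([1, 9, 3, 7], 2)
print(values, indices)  # [9, 7] [1, 3]
```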

transfer_to()

max.experimental.functional.transfer_to(x, device)

source

Transfers a tensor to a specified device. See max.graph.ops.transfer_to() for details.

Parameters:

Return type:

TensorValue

transpose()

max.experimental.functional.transpose(x, axis_1, axis_2)

source

Transposes a tensor. See max.graph.ops.transpose() for details.

Parameters:

Return type:

TensorValue

trunc()

max.experimental.functional.trunc(x)

source

Truncates tensor values element-wise. See max.graph.ops.trunc() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

unsqueeze()

max.experimental.functional.unsqueeze(x, axis)

source

Adds dimensions of size 1. See max.graph.ops.unsqueeze() for details.

Parameters:

Return type:

TensorValue

where()

max.experimental.functional.where(condition, x, y)

source

Selects elements from two tensors based on a condition. See max.graph.ops.where() for details.

Parameters:

Return type:

TensorValue
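
Where picks from x at positions where the condition holds and from y elsewhere. A plain-Python sketch of the elementwise semantics (the real op broadcasts all three arguments):

```python
def where(condition, x, y):
    # Pick x[i] where condition[i] is true, else y[i].
    return [a if c else b for c, a, b in zip(condition, x, y)]

print(where([True, False, True], [1, 2, 3], [9, 9, 9]))  # [1, 9, 3]
```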