Sort `Instruction` variants + Ensure each holds a singular oparg type by ShaharNaveh · Pull Request #6312 · RustPython/RustPython

238-251: Tuple-style emit! macro, EmitArg, and const-load helpers look correct

The new emit! arm for Instruction::$op(arg) correctly routes through emit_arg, and the EmitArg impls for OpArgType and ir::BlockIdx ensure that:

  • data-bearing opcodes get their OpArg from OpArgMarker::new(...), and
  • label-based opcodes (e.g. Jump, Break, Continue) store the label in InstructionInfo::target with a marker created via OpArgMarker::marker().

emit_load_const / emit_return_const and the emit_return_value peephole now construct and rewrite LoadConst(idx) / ReturnConst(idx) using the marker form, reusing the existing encoded OpArg under the hood. This matches the new enum shape and keeps indices and labels type-safe. Given the provided marker() definition, using OpArgMarker::marker() as a zero-sized type marker here is consistent.

Also applies to: 5091-5120, 5454-5475
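
To make the routing concrete, here is a minimal, self-contained sketch of the tuple-style `emit!` arm dispatching through an `emit_arg`-style helper. All types here (`Instruction`, `EmitArg`, `Compiler`) are simplified stand-ins, not RustPython's real definitions:

```rust
// Simplified stand-ins for the PR's types: a tuple-style instruction enum and
// an EmitArg-like trait that encodes each oparg type as a raw u32.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Instruction {
    LoadConst(u32),
    BuildTuple(u32),
    Nop,
}

trait EmitArg {
    fn to_op_arg(self) -> u32;
}

impl EmitArg for u32 {
    fn to_op_arg(self) -> u32 { self }
}

struct Compiler {
    code: Vec<(Instruction, u32)>,
}

impl Compiler {
    // The variant constructor (e.g. Instruction::LoadConst) is passed as a
    // plain fn(u32) -> Instruction, so encoding lives in one place.
    fn emit_arg<A: EmitArg>(&mut self, arg: A, f: impl FnOnce(u32) -> Instruction) {
        let raw = arg.to_op_arg();
        self.code.push((f(raw), raw));
    }
}

// Tuple-style macro arms: data-bearing opcodes route through emit_arg,
// argument-less opcodes are pushed directly.
macro_rules! emit {
    ($c:expr, Instruction::$op:ident($arg:expr)) => {
        $c.emit_arg($arg, Instruction::$op)
    };
    ($c:expr, Instruction::$op:ident) => {
        $c.code.push((Instruction::$op, 0))
    };
}

fn demo() -> Vec<(Instruction, u32)> {
    let mut c = Compiler { code: Vec::new() };
    emit!(c, Instruction::LoadConst(7_u32));
    emit!(c, Instruction::BuildTuple(2_u32));
    emit!(c, Instruction::Nop);
    c.code
}
```

The key property the review checks for is visible here: the variant's payload and the encoded `OpArg` come from the same conversion, so they cannot drift apart.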


430-487: BuildSlice refactor keeps 2-/3-argument slice semantics intact

In compile_subscript and the standalone slice expression, Instruction::BuildSlice(n == 3) and Instruction::BuildSlice(step) move from struct fields to tuple-style args while still using a boolean to distinguish 2 vs 3 stack operands. That matches CPython’s BINARY_SLICE/BUILD_SLICE contract and preserves the existing control flow around 2-element vs stepped slices.

Also applies to: 4390-4407
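
The 2-vs-3 operand contract that the boolean encodes can be sketched as a tiny stack routine (simplified types, not the real interpreter):

```rust
#[derive(Debug, PartialEq)]
struct Slice {
    start: i64,
    stop: i64,
    step: Option<i64>,
}

// BuildSlice(step): `step == true` consumes three stack operands
// (start, stop, step), `step == false` consumes two (start, stop).
fn build_slice(stack: &mut Vec<i64>, step: bool) -> Slice {
    let step_val = if step { stack.pop() } else { None };
    let stop = stack.pop().expect("stop operand");
    let start = stack.pop().expect("start operand");
    Slice { start, stop, step: step_val }
}
```

The operand order (step popped first, then stop, then start) mirrors left-to-right evaluation of `start:stop:step`.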


497-538: Collection-building opcodes (Build*, Unpack*) are converted mechanically and stay consistent

Across starunpack_helper, default-argument handling, dict construction, lambda defaults, gather_elements, and f-string assembly, you've systematically switched:

  • BuildTuple{...}, BuildList{...}, BuildSet{...}, BuildMap{...}, BuildString{...} → BuildTuple(size), BuildList(size), BuildSet(size), BuildMap(size), BuildString(count), with size/count always a u32 derived from lengths (to_u32(), try_from), and
  • UnpackSequence{...}, UnpackEx{...} → UnpackSequence(n), UnpackEx(args).

The stack shapes (how many elements are consumed/produced) and the computed counts are unchanged; only the enum constructor syntax and arg plumbing differ. No off-by-one or overflow risks are apparent in the new uses.

Also applies to: 2079-2122, 4305-4323, 4475-4501, 4839-4881, 5340-5341, 5447-5448
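
The overflow-safe length-to-oparg derivation these sites rely on amounts to a one-liner; a hedged sketch (the function name is illustrative, not from the PR):

```rust
// Derive a u32 oparg from a collection length; on 64-bit targets a usize can
// exceed u32::MAX, so try_from surfaces the (unlikely) overflow as an error
// instead of silently truncating.
fn element_count<T>(elts: &[T]) -> Result<u32, &'static str> {
    u32::try_from(elts.len()).map_err(|_| "collection too large for a u32 oparg")
}
```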


783-799: Resume opcodes now take raw u32 tags; usage is consistent across scopes and suspension points

All Resume sites (AtFuncStart, AfterYield, AfterAwait, AfterYieldFrom) now pass bytecode::ResumeType::* as u32, which matches the new tuple-style constructor and appears consistent across:

  • function/module entry (enter_scope),
  • async with / async for,
  • yield, await, yield from,
  • generator-based comprehensions.

Every Resume is still paired with the correct preceding Yield*/GetAwaitable/YieldFrom. As long as Resume’s oparg type in Instruction is indeed u32 (or a type alias thereof), this is semantically equivalent.

Also applies to: 2836-2845, 2886-2888, 2922-2929, 4417-4454, 4529-4539, 4993-5003, 5058-5064
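
The `ResumeType::* as u32` pattern is lossless for a fieldless `#[repr(u32)]` enum; a sketch with hypothetical discriminant values (the real numbering lives in `bytecode` and may differ):

```rust
// Hypothetical discriminants for illustration only.
#[derive(Debug, Clone, Copy, PartialEq)]
#[repr(u32)]
enum ResumeType {
    AtFuncStart = 0,
    AfterYield = 1,
    AfterYieldFrom = 2,
    AfterAwait = 3,
}

// The interpreter side can recover the variant from the raw tag, so the
// `as u32` cast at each emit site round-trips exactly.
fn resume_from_u32(tag: u32) -> Option<ResumeType> {
    Some(match tag {
        0 => ResumeType::AtFuncStart,
        1 => ResumeType::AfterYield,
        2 => ResumeType::AfterYieldFrom,
        3 => ResumeType::AfterAwait,
        _ => return None,
    })
}
```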


1062-1075: CopyItem / Swap depth indices remain aligned with previous stack discipline

You’ve replaced legacy dup/rot instructions with tuple-style CopyItem(depth) and Swap(depth) in several stack-sensitive places:

  • REPL last-expression printing and block expressions,
  • multi-target assignments and named expressions,
  • __classcell__ capture in class bodies,
  • pattern-matching helpers (sequence/mapping/or patterns),
  • chained comparisons and guard handling,
  • augmented assignment for subscripts/attributes.

All these call sites preserve the old constant depths (1_u32, 2_u32, 3, 1 + remaining, etc.), and the surrounding comments/logic still match the expected stack shapes (e.g., copying the subject before pattern attempts, reordering container/slice/result in augassign). I don’t see any depth mismatches or regressions here.

Also applies to: 1087-1098, 1636-1644, 2626-2633, 3163-3175, 3230-3233, 3331-3337, 3492-3551, 3595-3603, 3838-3850, 3947-3954, 4128-4145, 4162-4170, 4648-4651
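
For reference while checking depths, the CopyItem(depth)/Swap(depth) stack semantics (1-based from the top of stack, matching CPython's COPY/SWAP) can be sketched as:

```rust
// CopyItem(i): push a copy of the item `i` deep (1 = top of stack).
fn copy_item(stack: &mut Vec<i32>, depth: usize) {
    let v = stack[stack.len() - depth];
    stack.push(v);
}

// Swap(i): exchange the top of stack with the item `i` deep,
// so Swap(2) swaps the top two items.
fn swap(stack: &mut Vec<i32>, depth: usize) {
    let top = stack.len() - 1;
    let other = stack.len() - depth;
    stack.swap(top, other);
}
```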


1305-1391: Import and attribute opcodes (Import*, Load*/Store*/DeleteAttr, LoadMethod) updated cleanly

The various import and attribute-related instructions now use tuple-style constructors:

  • ImportName(idx), ImportFrom(idx), ImportNameless,
  • LoadAttr(idx), StoreAttr(idx), DeleteAttr(idx),
  • LoadMethod(idx).

Each idx is consistently obtained via self.name(...) (a NameIdx), and the surrounding control flow (from-import *, chained attribute loads, delete-attribute, method calls) is unchanged. This is a straightforward enum-shape refactor with no semantic change.

Also applies to: 1721-1735, 4041-4053, 4362-4366, 4742-4751


1408-1463: Control-flow opcodes with label operands correctly leverage the new Label plumbing

All structured control flow now routes label targets through emit!(..., Instruction::<Op>(block_idx)), relying on the new EmitArg<bytecode::Label> for ir::BlockIdx:

  • Jump(...) in if/elif chains, while/for loops, try/except/else/finally, match, and comprehensions,
  • loop control (Break(exit_block), Continue(loop_block), ForIter(else_block)),
  • exception setup (SetupExcept(handler_block), SetupFinally(finally_block)),
  • boolean / guard jumps (PopJumpIfTrue/False, JumpIfFalseOrPop, JumpIfTrueOrPop).

Targets are always the appropriate BlockIdx (e.g., after/else blocks, handler blocks, loop heads/tails), and there’s no sign of miswired labels. This matches the new label-based Instruction design without altering high-level control flow.

Also applies to: 1553-1607, 1958-2061, 2791-2816, 2896-2965, 2999-3015, 3812-3890, 4207-4277, 4281-4303, 4990-5003
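
The benefit of routing labels as block indices is that the compiler never handles raw offsets; a later pass resolves them. A minimal two-pass sketch with stand-in types (not the real `ir::BlockIdx` plumbing):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
struct BlockIdx(u32);

#[derive(Debug, PartialEq)]
enum Inst {
    Jump(BlockIdx),
    Nop,
}

// Resolve each block index to its final instruction offset. The output
// (tag, oparg) pairs stand in for encoded instructions (Jump = 0, Nop = 1
// in this sketch).
fn resolve(code: &[Inst], block_offsets: &[u32]) -> Vec<(u32, u32)> {
    code.iter()
        .map(|i| match i {
            Inst::Jump(BlockIdx(b)) => (0, block_offsets[*b as usize]),
            Inst::Nop => (1, 0),
        })
        .collect()
}
```

Because every `Jump(block_idx)` stays symbolic until resolution, inserting or reordering blocks cannot invalidate already-emitted jumps.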


1465-1479: Raise and comparison-related opcodes migrated to tuple-style with unchanged semantics

The refactors:

  • Raise(kind) for the three RaiseKind cases,
  • CompareOperation(ComparisonOperator::...) in mapping/sequence patterns and chained comparisons,
  • ContainsOp(Invert::...), IsOp(Invert::...) in comparisons and pattern checks,

are all one-to-one replacements of the prior struct-style variants. The chosen ComparisonOperator values and Invert flags still align with the surrounding logic (>= for mapping-length checks, == for value patterns, ==/>= in chained comparisons, etc.). No behavioural differences are introduced.

Also applies to: 3421-3443, 3720-3739, 3756-3765, 3898-3973


1806-1955: Type-parameter and generic handling matches the new intrinsic and function-attribute opcodes

In the type-params pipeline:

  • compile_type_param_bound_or_default uses UnpackSequence(1) only when allowed and wraps evaluation in a tiny function called via CallFunctionPositional(0), as before.
  • compile_type_params now emits CallIntrinsic1(TypeVar/ParamSpec/TypeVarTuple) and CallIntrinsic2(TypeVarWithBound/Constraint/SetTypeparamDefault) with tuple-style args, plus CopyItem(1) + store_name to bind each type param. The counts passed to BuildTuple(...) are still derived from type_params.len().

For generic functions/classes:

  • compile_function_def and compile_class_def use Swap(2) / Swap(num_typeparam_args + 1) and CallIntrinsic2(SetFunctionTypeParams) to attach type params, and later CallFunctionPositional(num_typeparam_args) to invoke the wrapper closure.
  • In class compilation, CallIntrinsic1(SubscriptGeneric) and the subsequent CallFunctionKeyword/Positional for __build_class__ are updated mechanically to tuple-style with the same argument counts.

Across these sites the stack comments and arg counts still line up with the design; I don’t see any regressions in how generics are threaded through.

Also applies to: 2550-2640, 2656-2760


1958-2061: Try/except/finally, async-with, async-for, and async comprehensions remain structurally equivalent

The conversions to:

  • SetupFinally(finally_block), SetupExcept(handler_block/after_block), Jump(else_block/finally_block), Raise(Reraise) in compile_try_statement,
  • SetupAsyncWith(final_block) / SetupWith(final_block) plus Resume(AfterAwait as u32) in compile_with,
  • async-for’s SetupExcept(else_block), Resume(AfterAwait as u32), and EndAsyncFor,
  • async comprehensions’ SetupExcept(after_block), Resume(AfterAwait as u32), and EndAsyncFor,

all keep the same block structure and stack behaviour used previously, just through tuple-style variants. Exception handling, finally unwinding, and async control flow should behave identically.

Also applies to: 2818-2894, 2896-2965, 4990-5003


3096-3570: Pattern-matching opcodes (Unpack*, Match*, mapping/sequence/or-pattern helpers) are consistently updated

In the structural pattern-matching helpers you’ve migrated to tuple-style opcodes:

  • UnpackEx(args) / UnpackSequence(n) in sequence unpacking and star patterns,
  • MatchClass(nargs), MatchMapping, MatchSequence,
  • mapping patterns’ BuildTuple(size) for keys, BuildMap(0), DictUpdate(2), UnpackSequence(size), and the CopyItem/Swap choreography for deleting keys into DeleteSubscript,
  • BinaryOperation(BinaryOperator::Subtract) for index arithmetic,
  • various CopyItem/Swap calls in or-pattern store alignment and sequence subscripting.

The updated instructions mirror the CPython algorithm and the prior Rust implementation; counts are still derived from elts.len(), nargs + n_attrs, or size checks that guard against overflow. I don’t see off-by-one issues or stack mismanagement introduced by the tuple-style transition here.

Also applies to: 3680-3769
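
As a reference point for the star-pattern counts, UnpackEx semantics can be sketched with an explicit (before, after) pair, one plausible decoding of its packed oparg (the real encoding packs both counts into one value):

```rust
// UnpackEx with `before` fixed items ahead of the star and `after` behind it:
// the middle of the sequence is collected, as in `a, *rest, b = seq`.
fn unpack_ex(seq: &[i32], before: usize, after: usize) -> (Vec<i32>, Vec<i32>, Vec<i32>) {
    assert!(seq.len() >= before + after, "sequence too short to unpack");
    let front = seq[..before].to_vec();
    let rest = seq[before..seq.len() - after].to_vec();
    let back = seq[seq.len() - after..].to_vec();
    (front, rest, back)
}
```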


3822-3890: match case dispatch and guards use the new jump opcodes correctly

For match statements:

  • Each non-default case now uses CopyItem(1) to duplicate the subject (except the last checked case), compiles the pattern, then jumps with Jump(end) after the case body.
  • Guards use PopJumpIfFalse(pattern_context.fail_pop[0]) to fall back to the failure label, and you still call emit_and_reset_fail_pop between cases.
  • The final default case, when present, uses JumpIfFalseOrPop(end) for its guard.

The tuple-style Jump/PopJumpIfFalse/JumpIfFalseOrPop opcodes target the same blocks as before, and the subject clean-up (Pop) and store binding logic remain intact. Control flow through cases is unchanged.


3898-3973: Chained comparisons and boolean short-circuiting preserve previous control-flow structure

For chained comparisons:

  • CompareOperation(Equal/Less/LessOrEqual/Greater/GreaterOrEqual) is now emitted via the tuple-style variant.
  • The Swap(2) + CopyItem(2) pattern to retain the RHS for the next comparison, and JumpIfFalseOrPop(break_block) for early exit, are unchanged in logic.

For boolean expressions:

  • compile_jump_if’s fallback path now uses PopJumpIfTrue/False(target_block),
  • compile_bool_op uses JumpIfFalseOrPop / JumpIfTrueOrPop with the same block layout (after_block).

Only the constructor syntax changed; the jump conditions, labels, and stack effects remain as before.

Also applies to: 4207-4277, 4281-4303
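
The JumpIfFalseOrPop behaviour that both chained comparisons and `and` chains depend on can be pinned down in a few lines (sketch over a bool stack; the real interpreter tests truthiness of arbitrary objects):

```rust
// JumpIfFalseOrPop: a falsey top-of-stack is KEPT (it becomes the
// expression's result) and the jump fires; a truthy top-of-stack is POPPED
// and execution falls through to the next comparison/operand.
// Returns whether the jump is taken.
fn jump_if_false_or_pop(stack: &mut Vec<bool>) -> bool {
    if *stack.last().expect("non-empty stack") {
        stack.pop();
        false
    } else {
        true
    }
}
```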


4350-4366: Expression-level opcodes (unary, slice, yield/await, comprehensions, f-strings) are refactored mechanically

Key expression sites now use tuple-style variants:

  • UnaryOperation(op) after unary exprs, with op unchanged.
  • BuildSlice(step) in slice expressions (bool step flag).
  • YieldValue + Resume(AfterYield/AfterYieldFrom) and GetAwaitable + YieldFrom + Resume(AfterAwait) for yield, yield from, await.
  • Comprehensions’ initializers BuildList/BuildSet/BuildMap(OpArgMarker::marker()) and per-element ops ListAppend(len), SetAdd(len), MapAdd(len) in compile_comprehension.
  • f-strings using FormatValue(conversion) and BuildString(count) for both nested and top-level formatting.

Given the counts and enums are unchanged, these are straightforward enum-shape updates without semantic changes.

Also applies to: 4406-4407, 4417-4454, 4538-4584, 4593-4612, 5340-5341, 5436-5448


4720-4834: Call and keyword handling (CallFunction* / CallMethod* / BuildMap*) align with the new API

In the call path:

  • compile_call now uses LoadMethod(idx) for method calls, then dispatches through compile_method_call vs compile_normal_call.
  • compile_normal_call and compile_method_call cleanly map CallType::{Positional,Keyword,Ex} to CallFunctionPositional/Keyword/Ex and their method counterparts.
  • compile_call_inner uses BuildTupleFromTuples(size) vs BuildTuple(size) for positional args, and CallFunctionKeyword(count) when keyword names are packed into a const tuple.
  • compile_keywords emits BuildMap(sub_size) for each group of named keywords and BuildMapForCall(size) when multiple mapping segments are to be merged.

All argument counts are derived from arguments.len(), size, or keyword group sizes as before, so the switch to tuple-style opcodes should not affect call semantics.
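
The CallType-to-opcode dispatch is essentially a direct match; a sketch with simplified stand-in enums (payloads and names are illustrative, not the PR's exact shapes):

```rust
#[derive(Debug, PartialEq)]
enum CallType {
    Positional(u32),
    Keyword(u32),
    Ex(bool),
}

#[derive(Debug, PartialEq)]
enum Inst {
    CallFunctionPositional(u32),
    CallFunctionKeyword(u32),
    CallFunctionEx(bool),
}

// Each call shape maps one-to-one onto its tuple-style opcode; the argument
// count (or has-kwargs flag) is threaded through unchanged.
fn normal_call(call: CallType) -> Inst {
    match call {
        CallType::Positional(n) => Inst::CallFunctionPositional(n),
        CallType::Keyword(n) => Inst::CallFunctionKeyword(n),
        CallType::Ex(has_kwargs) => Inst::CallFunctionEx(has_kwargs),
    }
}
```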


5130-5136: Const-return peephole still works with marker-based LoadConst / ReturnConst

emit_load_const / emit_return_const now create LoadConst(idx) / ReturnConst(idx) using emit_arg, and emit_return_value rewrites a terminal LoadConst(idx) into ReturnConst(idx) by pattern-matching on the new tuple-style variant.

Because the OpArg payload is untouched when you mutate inst.instr, the encoded constant index is preserved. This keeps the peephole optimization intact under the new Instruction representation.

Also applies to: 5140-5143
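
The rewrite itself reduces to a last-instruction pattern match; a self-contained sketch over simplified tuple-style variants (not the real `InstructionInfo`):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Inst {
    LoadConst(u32),
    ReturnConst(u32),
    ReturnValue,
}

// Const-return peephole: a trailing LoadConst(idx) is rewritten in place to
// ReturnConst(idx), carrying the const index payload over untouched;
// otherwise a plain ReturnValue is appended.
fn emit_return_value(code: &mut Vec<Inst>) {
    if let Some(Inst::LoadConst(idx)) = code.last().copied() {
        *code.last_mut().unwrap() = Inst::ReturnConst(idx);
    } else {
        code.push(Inst::ReturnValue);
    }
}
```

Because only the variant tag changes and the index is reused verbatim, the optimization cannot corrupt the constant reference.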