perf: improve analysis performance by 95% for `py_binary` and `py_test` rules by tobyh-canva · Pull Request #3380 · bazel-contrib/rules_python

github-merge-queue bot pushed a commit that referenced this pull request

Nov 10, 2025
…nalysis phase performance (#3381)

When py_binary/py_test were built, they flattened the runfiles depsets at analysis time in order to create the zip file mapping manifest for their implicit zipapp outputs. The flattening was necessary because they had to filter out the original main executable, which didn't belong in the zipapp, from the runfiles. This flattening is expensive for large builds, in some cases adding over 400 seconds of time and significant memory overhead.

To fix this, the zip file manifest now uses the `runfiles_with_exe` object, which is the runfiles pre-filtered to exclude the files that zip building doesn't want. This allows passing the depsets directly to `Args.add_all` and using `map_each` to transform them.

Additionally, `runfiles.empty_filenames` is now passed using a lambda, because accessing that attribute implicitly flattens the runfiles.
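A toy Python illustration (not the rules_python code) of why wrapping an expensive attribute access in a lambda helps: the access, and therefore the implicit flatten it triggers, is deferred until the value is actually consumed:

```python
class Runfiles:
    """Stand-in for an object whose attribute access is expensive."""

    def __init__(self):
        self.flatten_calls = 0

    @property
    def empty_filenames(self):
        # Stands in for the implicit flatten that the real attribute
        # access performs.
        self.flatten_calls += 1
        return []

runfiles = Runfiles()

# Eager: the attribute access (and the flatten) happens immediately.
eager = runfiles.empty_filenames
assert runfiles.flatten_calls == 1

# Lazy: wrapping in a lambda defers the access until called.
lazy = lambda: runfiles.empty_filenames
assert runfiles.flatten_calls == 1  # not flattened again yet
lazy()
assert runfiles.flatten_calls == 2  # flatten paid only on use
```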

Finally, because the original profiles indicated `str.format()` took a non-trivial amount of time (46 seconds / 15% of build time), it was switched to `+` concatenation instead.
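The string change itself is trivial. An illustrative comparison, assuming a manifest line of the hypothetical form `short_path=path` (Starlark string semantics match Python's here):

```python
short_path = "pkg/app.py"
path = "bazel-out/bin/pkg/app.py"

# Equivalent output, but format() must parse its template string on
# every call, while `+` concatenates directly.
with_format = "{}={}".format(short_path, path)
with_concat = short_path + "=" + path

assert with_format == with_concat == "pkg/app.py=bazel-out/bin/pkg/app.py"
```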

This is a more incremental alternative to #3380, which achieves _most_ of the same optimization with only Starlark changes, as opposed to introducing an external script written in C++.

[Profile of a large build](https://github.com/user-attachments/assets/e90ae699-a04d-44df-b53c-1156aa890af5), a Starlark CPU profile showing an overall build time of 305 seconds. 46 seconds (15%) are spent in `map_zip_runfiles`, half of which is in `str.startswith()` and the other half in `str.format()`.

---------

Co-authored-by: Richard Levasseur <rlevasseur@google.com>

aignas added a commit to aignas/rules_python that referenced this pull request

Dec 6, 2025
Looking at the investigation in bazel-contrib#3381, it seems that we are calling `startswith` many times, and I wanted to see if it would be possible to optimize how it is done.

I also realized that no matter what target we have, we will call the function once with an `__init__.py` path, and we can inline this case as a separate if statement checking for equality instead, which the Starlark optimizer should understand better.

Before this PR, every executable target would go through `legacy_external_runfiles and "__init__.py".startswith("external")`; this PR eliminates that.

Related to bazel-contrib#3380 and bazel-contrib#3381
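The inlining described above can be sketched in plain Python (Starlark string methods behave the same way); the `external/` prefix and function names are illustrative, not the exact rules_python code:

```python
def is_external(path, legacy_external_runfiles=True):
    # Original shape: every path, including the very common
    # "__init__.py", goes through the prefix scan.
    return legacy_external_runfiles and path.startswith("external/")

def is_external_inlined(path, legacy_external_runfiles=True):
    # Inlined special case: "__init__.py" can never be external, and
    # an equality check is cheaper than a prefix scan.
    if path == "__init__.py":
        return False
    return legacy_external_runfiles and path.startswith("external/")

# Both versions agree on every input.
assert is_external("external/repo/f.py") == is_external_inlined("external/repo/f.py") == True
assert is_external("__init__.py") == is_external_inlined("__init__.py") == False
assert is_external("pkg/lib.py") == is_external_inlined("pkg/lib.py") == False
```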

github-merge-queue bot pushed a commit that referenced this pull request

Dec 7, 2025