Enh/bootstrapping for ci estimation by Marchma0 · Pull Request #897 · RocketPy-Team/RocketPy
Pull request type
- Code changes (bugfix, features)
- Code maintenance (refactoring, formatting, tests)
- ReadMe, Docs and GitHub updates
Checklist
- Tests for the changes have been added (if needed)
- Lint (`black rocketpy/ tests/`) has passed locally
- All tests (`pytest tests -m slow --runslow`) have passed locally
- `CHANGELOG.md` has been updated (if relevant)
Current behavior
Currently, the MonteCarlo class allows for running simulations and storing results, but it lacks built-in statistical tools to assess the reliability of these results. There is no native method to calculate confidence intervals, forcing users to manually extract data and perform external calculations to verify simulation convergence (e.g., to ensure the mean apogee has stabilized).
New behavior
This PR implements the estimate_confidence_interval method within the MonteCarlo class, using bootstrapping (via scipy.stats.bootstrap) to calculate confidence intervals for any result attribute (e.g., apogee, max_velocity). This enables users to directly quantify simulation uncertainty and determine if the iteration count is sufficient. Documentation has been updated to explain CI interpretation, and unit tests have been added to validate the calculations.
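For reviewers unfamiliar with `scipy.stats.bootstrap`, here is a minimal, self-contained sketch of the underlying technique: bootstrapping a confidence interval for a mean. The `apogee_samples` array is synthetic stand-in data, not output from the actual `MonteCarlo` class, and the code below is an illustration of the SciPy API rather than the implementation in this PR.

```python
import numpy as np
from scipy.stats import bootstrap

# Synthetic stand-in for Monte Carlo apogee results (hypothetical data,
# not produced by RocketPy's MonteCarlo class).
rng = np.random.default_rng(42)
apogee_samples = rng.normal(loc=3000.0, scale=50.0, size=200)

# scipy.stats.bootstrap expects a sequence of sample arrays and a
# statistic; it resamples with replacement to estimate the CI.
result = bootstrap(
    (apogee_samples,),
    np.mean,
    confidence_level=0.95,
    n_resamples=2000,
    method="BCa",
    random_state=rng,
)

low, high = result.confidence_interval
print(f"95% CI for mean apogee: [{low:.1f}, {high:.1f}] m")
```

A narrow interval indicates the mean has stabilized; a wide one suggests the simulation needs more iterations.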
Breaking change
- No
Additional information
I haven’t included the documentation yet, and I’m not entirely sure where it should go (in a notebook or a new file). Could you please indicate the appropriate place for it?