Message259582
| Author | brett.cannon |
|---|---|
| Recipients | brett.cannon, florin.papa, pitrou, serhiy.storchaka, skrah, vstinner, yselivanov, zbyrne |
| Date | 2016-02-04.17:47:37 |
| SpamBayes Score | -1.0 |
| Marked as misclassified | Yes |
| Message-id | <1454608057.77.0.557218400017.issue26275@psf.upfronthosting.co.za> |
| In-reply-to |
| Content | |
|---|---|
What would happen if we shifted to counting the number of executions within a set amount of time instead of measuring how fast a single execution occurred? I believe some JavaScript benchmark suites started doing this about a decade ago when they realized CPUs had become so fast that older benchmarks were completing too quickly to be reliably measured. This would also give a very strong notion of how long a benchmark run will take, based on the number of iterations and the time-length bucket a benchmark was placed in (e.g., a second for microbenchmarks, with a larger threshold for longer-running benchmarks). And it won't hurt benchmark comparisons, since we have always done relative comparisons rather than absolute ones. |
|
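The scheme described above can be sketched as a small Python helper; the function name, the budget parameter, and the sample workloads are illustrative assumptions, not part of the proposal:

```python
import time

def count_iterations(func, budget=1.0):
    """Run func repeatedly and count completed calls within a time budget.

    The score is iterations-per-budget (higher is better) instead of
    seconds-per-run (lower is better); relative comparisons between two
    interpreters remain meaningful either way.
    """
    count = 0
    deadline = time.perf_counter() + budget
    while time.perf_counter() < deadline:
        func()
        count += 1
    return count

# Compare two workloads relatively within the same budget.
fast = count_iterations(lambda: sum(range(100)), budget=0.2)
slow = count_iterations(lambda: sum(range(10_000)), budget=0.2)
print(fast, slow)
```

The total wall-clock cost is predictable up front: it is simply the number of benchmarks times each one's time bucket, regardless of how fast the machine is.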
| History | |||
|---|---|---|---|
| Date | User | Action | Args |
| 2016-02-04 17:47:37 | brett.cannon | set | recipients: + brett.cannon, pitrou, vstinner, skrah, serhiy.storchaka, yselivanov, zbyrne, florin.papa |
| 2016-02-04 17:47:37 | brett.cannon | set | messageid: <1454608057.77.0.557218400017.issue26275@psf.upfronthosting.co.za> |
| 2016-02-04 17:47:37 | brett.cannon | link | issue26275 messages |
| 2016-02-04 17:47:37 | brett.cannon | create | |