Message278907
| Author | vstinner |
|---|---|
| Recipients | brett.cannon, lemburg, ned.deily, pitrou, python-dev, rhettinger, serhiy.storchaka, steven.daprano, tim.peters, vstinner |
| Date | 2016-10-18.16:30:11 |
| SpamBayes Score | -1.0 |
| Marked as misclassified | Yes |
| Message-id | <1476808211.14.0.0541047715479.issue28240@psf.upfronthosting.co.za> |
| In-reply-to | |
| Content | |
|---|---|
I'm disappointed by the discussion on minimum vs average. Using the perf module (python3 -m perf timeit), it's very easy to show that the average is more reliable than the minimum. The perf module runs 20 worker processes by default: with so many processes, it's easy to see that each process has a different timing because of the random address space layout and the randomized Python hash function. Serhiy wrote (2016-09-21): "This makes hard to compare results with older Python versions." Serhiy is right. I see two options: display the average _and_ the minimum (which can be confusing for users!), or display the same warning as PyPy: "WARNING: timeit is a very unreliable tool. use perf or something else for real measurements". But since I'm grumpy now, I will just close the issue :-) I pushed enough changes to timeit for today ;-)
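The per-process variance described above can be sketched with the standard library alone (no perf dependency). This is a hedged illustration, not perf's actual implementation: it runs the same micro-benchmark in several fresh interpreter processes, so each one gets its own hash seed and address-space layout, then compares the minimum against the mean. The statement being timed and the process count are arbitrary choices for the demo.

```python
import statistics
import subprocess
import sys

# A hash-sensitive micro-benchmark: its timing shifts with the per-process
# randomized hash seed, which is one source of the variance discussed above.
STMT = "sum(hash(i) for i in range(1000))"

def time_in_subprocess(stmt: str) -> float:
    """Time `stmt` in a fresh interpreter and return its best-of-5 result."""
    code = (
        "import timeit;"
        f"print(min(timeit.repeat({stmt!r}, number=1000, repeat=5)))"
    )
    out = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout)

# Each worker process produces a slightly different timing.
timings = [time_in_subprocess(STMT) for _ in range(5)]

# The minimum keeps only the single luckiest process; the mean together with
# the standard deviation reflects the spread across processes.
print(f"min  = {min(timings):.6f} s")
print(f"mean = {statistics.mean(timings):.6f} s "
      f"+- {statistics.stdev(timings):.6f} s")
```

Running this a few times shows the point of the message: the minimum jumps around depending on which process happened to be lucky, while the mean (with its standard deviation) summarizes the whole distribution.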
| History | |||
|---|---|---|---|
| Date | User | Action | Args |
| 2016-10-18 16:30:11 | vstinner | set | recipients: + vstinner, lemburg, tim.peters, brett.cannon, rhettinger, pitrou, ned.deily, steven.daprano, python-dev, serhiy.storchaka |
| 2016-10-18 16:30:11 | vstinner | set | messageid: <1476808211.14.0.0541047715479.issue28240@psf.upfronthosting.co.za> |
| 2016-10-18 16:30:11 | vstinner | link | issue28240 messages |
| 2016-10-18 16:30:11 | vstinner | create | |