Message259551
| Author | skrah |
|---|---|
| Recipients | brett.cannon, florin.papa, pitrou, serhiy.storchaka, skrah, vstinner, yselivanov, zbyrne |
| Date | 2016-02-04.09:37:13 |
| SpamBayes Score | -1.0 |
| Marked as misclassified | Yes |
| Message-id | <1454578633.63.0.108556723525.issue26275@psf.upfronthosting.co.za> |
| In-reply-to | |
| Content | |
|---|---|

Core 1 fluctuates even more (my machine only has 2 cores):

```
$ taskset -c 1 ./python telco.py full
Control totals:
  Actual   ['1004737.58', '57628.30', '25042.17']
  Expected ['1004737.58', '57628.30', '25042.17']
Elapsed time: 6.783009

$ taskset -c 1 ./python telco.py full
Control totals:
  Actual   ['1004737.58', '57628.30', '25042.17']
  Expected ['1004737.58', '57628.30', '25042.17']
Elapsed time: 7.335563
```

I have some of the same concerns as Serhiy. There's a lot of statistics going on in the benchmark suite -- is it really possible to separate that cleanly from the actual runtime of the benchmarks?
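The fluctuation above can be made concrete by timing repeated runs and reporting the relative spread. This is a minimal sketch, not the benchmark suite's actual methodology: `workload()` is a hypothetical stand-in for telco.py, and the statistics shown (mean, stdev, max-min spread) are one simple way to quantify run-to-run noise.

```python
# Hypothetical sketch: quantify run-to-run fluctuation of a benchmark.
# workload() is a stand-in busy loop, NOT the real telco.py benchmark.
import statistics
import time

def workload():
    # Simple CPU-bound stand-in for the benchmark under test.
    total = 0
    for i in range(200_000):
        total += i * i
    return total

def measure(runs=5):
    """Time `runs` executions and return (mean, stdev, relative spread)."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        timings.append(time.perf_counter() - start)
    mean = statistics.mean(timings)
    # Relative spread: (slowest - fastest) as a fraction of the mean,
    # a rough indicator of how noisy the timings are.
    spread = (max(timings) - min(timings)) / mean
    return mean, statistics.stdev(timings), spread

if __name__ == "__main__":
    mean, sd, spread = measure()
    print(f"mean={mean:.6f}s stdev={sd:.6f}s spread={spread:.1%}")
```

The two telco.py runs above (6.78 s vs 7.34 s) correspond to a spread of roughly 8% of the mean, which is the kind of variation that makes separating measurement statistics from true runtime difficult.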
| History | |||
|---|---|---|---|
| Date | User | Action | Args |
| 2016-02-04 09:37:13 | skrah | set | recipients: + skrah, brett.cannon, pitrou, vstinner, serhiy.storchaka, yselivanov, zbyrne, florin.papa |
| 2016-02-04 09:37:13 | skrah | set | messageid: <1454578633.63.0.108556723525.issue26275@psf.upfronthosting.co.za> |
| 2016-02-04 09:37:13 | skrah | link | issue26275 messages |
| 2016-02-04 09:37:13 | skrah | create | |