Python's simplicity philosophy
Alex Martelli
aleax at aleax.it
Fri Nov 14 12:02:25 EST 2003
More information about the Python-list mailing list
- Previous message (by thread): Python's simplicity philosophy
- Next message (by thread): Too much builtins (was Re: Python's simplicity philosophy
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
Douglas Alan wrote:
   ...
>> That is non-trivial to most, based on my experience in explaining it
>> to other people (which for the most part have been computational
>> physicists, chemists, and biologists).
>
> I find this truly hard to believe.  APL was a favorite among
> physicists who worked at Johns Hopkins Applied Physics Laboratory
> where I lived for a year when I was in high school, and you wouldn't

Interesting.  I worked for many years in an environment where physicists
doing research could freely choose between APL and Fortran (IBM Research),
and while there was a hard core of maybe 10%-15% of them who'd _never_
leave APL for any reason whatsoever, an overwhelmingly larger number of
physicists were at least as keen on Fortran.  I have no hard data on the
subject, but it appears to me that Fortran has always been way more
popular than APL among physicists as a whole.

> thing.  In fact, people seemed to like reduce() and friends -- people
> seemed to think it was a much more fun way to program, rather than
> using boring ol' loops.

...while most physicists I worked with were adamant that they wanted to
continue coding loops and have the _compiler_ vectorize them or
parallelize them or whatever.  Even getting them to switch to Linpack
etc from SOME of those loops was a battle at first, though as I recall
the many advantages did eventually win them over.

Anyway, computational scientists using Python should be using Numeric
(if they aren't, they're sadly misguided).
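[Editor's note: a minimal sketch of the loop-vs-vectorization point above,
using NumPy as the modern stand-in for Numeric; the variable names and
array sizes are illustrative, not from the original post.]

```python
import numpy as np

# The explicit element-by-element loop a Fortran-minded user might
# write first:
a = np.arange(10_000, dtype=float)
b = np.arange(10_000, dtype=float)
out = np.empty_like(a)
for i in range(len(a)):
    out[i] = a[i] + b[i]

# The vectorized form Numeric/NumPy encourages: one ufunc call,
# with the loop running in compiled C instead of the interpreter.
out_vec = a + b

assert np.array_equal(out, out_vec)
```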
Numeric's ufuncs ARE the right way to do the equivalent of APL's +/
(which is quite a different beast than ANY user-coded function/ would
be...), and show clear and obvious advantages in so doing:

[alex at lancelot tmp]$ timeit.py -c -s'import operator; xs=range(999)' 'x=reduce(operator.add, xs)'
1000 loops, best of 3: 290 usec per loop
[alex at lancelot tmp]$ timeit.py -c -s'xs=range(999)' 's=sum(xs)'
10000 loops, best of 3: 114 usec per loop
[alex at lancelot tmp]$ timeit.py -c -s'import Numeric as N; xs=N.arrayrange(999)' 'x=N.add.reduce(xs)'
100000 loops, best of 3: 9.3 usec per loop

Now *THAT* is more like it: 10+ times FASTER than sum, rather than 2+
times SLOWER!  Of course, you do have to use it right: in this snippet,
if you initialize xs wrongly...:

[alex at lancelot tmp]$ timeit.py -c -s'import Numeric as N; xs=range(999)' 'x=N.add.reduce(xs)'
100 loops, best of 3: 2.1e+03 usec per loop

...then you can say goodbye to performance, as you see.  But when used
skilfully, Numeric (or its successor numarray, I'm sure -- I just don't
have real experience with the latter yet) is just what numerically heavy
computations in Python require.

Alex
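[Editor's note: a sketch of the same three-way comparison re-run with
today's tools, assuming NumPy in place of Numeric, `functools.reduce`
in place of the old builtin, and the `timeit` module in place of the
`timeit.py` script; absolute numbers will differ by machine, but the
ordering reduce < sum < ufunc-reduce is the point being illustrated.]

```python
import timeit

N = 10_000  # repetitions per measurement; purely illustrative

setup_list = "import functools, operator; xs = list(range(999))"
setup_np = "import numpy as np; xs = np.arange(999)"

# Generic reduce over a Python list: interpreter-level function calls
# for every element.
t_reduce = timeit.timeit("functools.reduce(operator.add, xs)",
                         setup=setup_list, number=N)

# The sum() builtin: still a Python-level loop over list elements,
# but without the per-element function-call overhead.
t_sum = timeit.timeit("sum(xs)", setup=setup_list, number=N)

# The ufunc reduce over a NumPy array: the whole reduction runs in C.
t_ufunc = timeit.timeit("np.add.reduce(xs)", setup=setup_np, number=N)

print(f"reduce: {t_reduce:.4f}s  sum: {t_sum:.4f}s  "
      f"np.add.reduce: {t_ufunc:.4f}s")
```

Note that the initialization pitfall from the post carries over
unchanged: passing a plain Python list to `np.add.reduce` forces a
conversion on every call, and the speed advantage evaporates.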