???? Hardware-accelerated python ????
Alex Martelli
alex at magenta.com
Thu Aug 10 07:04:18 EDT 2000
More information about the Python-list mailing list
- Previous message (by thread): ???? Hardware-accelerated python ????
- Next message (by thread): Can I do this faster?
"Armin Steinhoff" <Armin at Steinhoff_de> wrote in message news:8ms9du$c3b at drn.newsguy.com...
[snip]
> >> You are wrong ... tokens processed as machine code is just faster.
[snip]
> Right so far ... but the term 'general purpose CPU' has changed since we
> have the TRANSMETA CPUs.
>
> Could not a TRANSMETA CPU be a candidate for implementing a 'special
> purpose CPU' for processing interpreter tokens??

??? The Transmeta Crusoe chips are just VLIW processors (an interesting approach to CPU design that has been around for a few years but hasn't managed to score any real success so far) embellished by patented 'code-morphing' software that lets them (allegedly) execute Pentium instruction streams pretty fast, claiming a better power-consumption/performance ratio than chips from Intel and Intel's other competitors. In other words, far from being "special-purpose", they're quite general-purpose CPUs with a (possibly) fast software approach to emulating Intel's widespread architecture.

It is, of course, quite possible that, if you are running on a Crusoe that is mostly emulating a Pentium (no matter how well it emulates it), and you also have a way to code directly to the Crusoe level, your software can get better performance than when compiled/assembled to Pentium machine code.

As to whether Transmeta's patented technology would let you code a better/faster interpreter/JITter than other well-known algorithms, I guess that is possible -- I have not read their patents -- but I have my doubts. The problem they're addressing is interpreting/JITting an encoding that was never designed for it (the Pentium machine-code encoding), and it would be a truly lucky accident if those presumably clever algorithms happened to help with the very different problem of optimally interpreting/JITting an encoding that WAS purposefully designed for the best possible interpretation with classical techniques.
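For concreteness, the "classical techniques" of bytecode interpretation boil down to a fetch/dispatch/execute loop over tokens. Here is a toy sketch in Python, with opcodes and an encoding invented purely for illustration (these are not CPython's actual bytecodes):

```python
# A minimal stack-based bytecode interpreter in the classical
# "fetch token, dispatch, execute" style.  Opcode names and the
# (opcode, argument) encoding are invented for this example.

PUSH, ADD, MUL, HALT = range(4)

def run(code):
    """Execute a flat list of (opcode, arg) pairs on an operand stack."""
    stack = []
    pc = 0  # program counter: index of the next token to fetch
    while True:
        op, arg = code[pc]   # fetch
        pc += 1
        if op == PUSH:       # dispatch + execute
            stack.append(arg)
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == HALT:
            return stack.pop()

# Compute (2 + 3) * 4
program = [(PUSH, 2), (PUSH, 3), (ADD, None),
           (PUSH, 4), (MUL, None), (HALT, None)]
result = run(program)  # -> 20
```

An encoding designed for interpretation keeps this fetch-and-dispatch step cheap and regular; the Pentium encoding, with its variable-length, historically accreted instruction format, was never laid out with such a loop in mind.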
In any case, optimal theoretical performance would be obtained by hand-coding (in their native VLIW machine code -- which may bear some family resemblance to horizontal microcoding, I guess) whatever algorithms work best for the bytecode/hardware combination. Whether any performance gains thus obtained would be worth the huge bother is a different issue.

By no stretch of the imagination, though, could the resulting interpreter be called a 'special-purpose CPU'; I would hope that any marketeer trying to come up with that particular spin would be laughed out of court by his peers before he could make a laughing-stock of his company by going public with it. Of course, judging from past performance, anything can happen if you let a creative marketeer loose with some half-chewed concepts and an inexhaustible appetite for appealing buzzwords. But, at least between us techies, I hope we can avoid such excesses...!

Alex