[Python-ideas] Proposal for default character representation
Greg Ewing
greg.ewing at canterbury.ac.nz
Thu Oct 13 05:24:51 EDT 2016
Mikhail V wrote:

> But: you claim to see bit patterns in hex numbers? Then I bet you will
> see them much better if you take binary notation (2 symbols) or quaternary
> notation (4 symbols), I guarantee.

Nope. The meaning of 0xC001 is much clearer to me than
1100000000000001, because I'd have to count the bits very carefully in
the latter to distinguish it from, e.g. 6001 or 18001.

The bits could be spaced out:

1100 0000 0000 0001

but that just takes up even more room to no good effect. I don't find
it any faster to read -- if anything, it's slower, because my eyes have
to travel further to see the whole thing.

Another point -- a string of hex digits is much easier for me to
*remember* if I'm transliterating it from one place to another. Not
only because it's shorter, but because I can pronounce it. "Cee zero
zero one" is a lot easier to keep in my head than "one one zero zero
zero zero zero zero zero zero zero zero zero zero zero one"... by the
time I get to the end, I've already forgotten how it started!

> And if you take consistent glyph set for them
> also you'll see them twice better, also guarantee 100%.

When I say "instantly", I really do mean *instantly*. I fail to see
how a different glyph set could reduce the recognition time to less
than zero.

--
Greg
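[For readers following along: the hex-vs-binary comparison in the message above can be reproduced with Python's own format specifiers. This is an illustrative sketch, not part of the original post; it uses only standard format-spec features (the `b` and `x` presentation types and the `_` grouping option from PEP 515, Python 3.6+).]

```python
n = 0xC001  # the value Greg uses as his example

# The compact hex form vs. the 16-bit binary form he compares it with.
print(format(n, "#x"))    # 0xc001
print(format(n, "016b"))  # 1100000000000001

# The "spaced out" variant from the message: the "_" option groups
# binary digits in fours, so the bit pattern reads like 1100 0000 0000 0001.
print(format(n, "_b"))    # 1100_0000_0000_0001

# Round-trip check: the long binary string really is the same number.
assert int("1100000000000001", 2) == 0xC001
```

This also shows why miscounting a single bit matters: dropping one zero from the middle of the binary literal silently halves the value, while the four-character hex form has no such failure mode.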