[PEP draft 2] Adding new math operators
Tim Hochberg
tim.hochberg at ieee.org
Wed Aug 9 19:01:45 EDT 2000
More information about the Python-list mailing list
hzhu at localhost.localdomain (Huaiyu Zhu) writes:

> On Wed, 09 Aug 2000 21:24:37 GMT, Tim Hochberg <tim.hochberg at ieee.org> wrote:
> >2a) If either c or d is a scalar (python float/int/complex or rank 0 array),
> >    it is converted to the type of the other operand without complaint.
>
> How are you supposed to implement that? This is 55% of the issue (IMO).
> For example, are you allowing x.E if x happens to be 3?

No. There are two cases here. The first is the literal case: for example,
(3*X) will attempt an elementwise or matrixwise multiplication depending on
the type (array/matrix) of X. Note that I'm not sure what a scalar
matrixwise multiply means; it may be exactly the same as an elementwise
multiply(?).

The second case is a variable that happens to be a scalar. If one knows in
advance that the input will be a scalar there is no problem; the problem
comes up when the input to a function could be either an array/matrix or a
scalar. Here is the problem case illustrated:

    def elementandmatrixwise_multiply1(a, b, c):
        # Fails if 'a', 'b', or 'c' is an int/float/complex
        return (a.M * b.M).E * c.E

I don't think that adding .E, etc. to the basic number types is really an
option, so the way to fix this is to check the input types at the function
boundary, as I suggested previously:

    def elementandmatrixwise_multiply2(a, b, c):
        a, b, c = asarray(a, Float), asarray(b, Float), asarray(c, Float)
        return (a.M * b.M) * c

The equivalent ~ notation function would be:

    def elementandmatrixwise_multiply3(a, b, c):
        return (a ~* b) * c  # Assuming an Array environment.

Option 2 admittedly looks a little heavier than the other two; however, it
is safer than either of them. Also, this is an artificially short example:
in a typical function the extra line is not going to be such a large
fraction of the function.
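[Editor's note: a minimal runnable sketch of the boundary-checking approach, using modern NumPy spellings (np.asarray with dtype=float in place of the Numeric-era asarray(a, Float)); the function name and inputs are illustrative, not from either package:

```python
import numpy as np

def matrix_then_elementwise(a, b, c):
    # Coerce at the function boundary: scalars, lists, and arrays all
    # become float arrays, so the body never sees a bare int/float.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    c = np.asarray(c, dtype=float)
    return np.dot(a, b) * c  # matrixwise product, then elementwise scale

eye = [[1.0, 0.0], [0.0, 1.0]]
print(matrix_then_elementwise(eye, eye, 2))  # scalar c is fine after coercion
```

Broadcasting then handles the scalar case of rule 2a automatically: the 0-d array produced from 2 multiplies elementwise against the 2x2 result.]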
> >In addition, the additional operator approach only helps here if the
> >sense of the operators is the same for both MatPy and NumPy. Which
> >means that, in effect, ~X would be matrixwise for both packages. I
> >admit to having lost track of this thread for a while, but last I
> >heard the wish was for ~X to be elementwise within MatPy.
>
> Not like that. If the difference is in operators, then you can stick with
> one flavor of objects throughout a module, and use ~op for occasional
> operations of the other flavor.

Yes, but if you have calls from outside the module all bets are off. I
suppose it would be OK to assume all objects are of a given type in a
function that is only called from within the module, but modules that are
called from outside should generally have their arguments checked and
adjusted. However, given that you know what the input types are going to be
inside a module, the differences between the .E and the ~* notation only
come up in the scalar case discussed above. The ~* notation does appear to
have an advantage in this case. For me, the advantage of checking things at
the module boundaries outweighs the extra line that's required, so I don't
see this as a big problem.

In addition, I still really don't like the fact that ~* and * could mean
opposite things depending on whether an object is an array or a matrix.
While this may rarely be a problem, when it is a problem it could be
confusing and hard to track down.

> On the other hand, if the difference is in operands, you can't be sure of
> the flavor of the objects in any big chunk of code if both operations
> exist, unless you set up a convention to always cast back to a given
> flavor after each operation. That's what I think was the reason someone
> came up with the "non-persistent-type" or "shadow" classes approach. This
> is 40% of the issue (IMO).

I understand the urge for shadow type operands, but I think they introduce
more problems than they solve.
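[Editor's note: for what it's worth, the two senses of multiplication at issue can be shown side by side in modern NumPy, which eventually grew a separate matrixwise operator (the @ of PEP 465) rather than the ~* spelling debated here:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
print(a * a)  # elementwise:  [[1 4] [9 16]]
print(a @ a)  # matrixwise:   [[7 10] [15 22]]
```

With two distinct spellings, neither operator ever changes meaning based on the operand's flavor, which is exactly the confusion worried about above.]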
The rules for straightforward conversions are, well, straightforward. I
suspect that shadow type operands would be either confusing, hard to
implement fully, or both. In addition, I think that disallowing mixed type
operations would catch the vast majority of the errors that could creep in
when one forgets to cast back and forth.

> Personal experience: during the initial development of MatPy I once tried
> to mix two flavors of objects in one piece of code (because I hadn't
> written wrappers for all the functions needed) and it was a nightmare. My
> impression is that separation at module level is going to work. Separation
> at function (method) level is the extreme. Mixing these within a function
> is just a call for disaster.

The test here is looking at code. People need to start coughing up real
code that uses mixed type operations and start seeing what real functions
look like in both notations. The only matrix operation that I use with any
regularity is dot, so mine would be pretty boring. Anyone have any good
examples?

-tim
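[Editor's note: a sketch of the "disallow mixed type operations" idea; the Matrix class and its error message are hypothetical, not from MatPy or NumPy. Its * converts bare scalars per rule 2a but raises on anything else, so a forgotten cast fails loudly instead of silently doing the wrong kind of multiply:

```python
import numpy as np

class Matrix:
    """Hypothetical matrix flavor that refuses mixed-type operations."""

    def __init__(self, data):
        self.data = np.asarray(data, dtype=float)

    def __mul__(self, other):
        if isinstance(other, (int, float, complex)):
            return Matrix(self.data * other)  # rule 2a: scalars convert silently
        if not isinstance(other, Matrix):
            raise TypeError("mixed Matrix/array operation; convert explicitly")
        return Matrix(np.dot(self.data, other.data))  # * is matrixwise here

m = Matrix([[1.0, 0.0], [0.0, 1.0]])
print((m * m).data)    # matrixwise product
print((m * 2.0).data)  # scalar case is allowed
```

Passing a raw array or list to * raises TypeError at the point of the mistake, which is the error-catching behavior argued for above.]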