Work-in-Progress on PL/Java refactoring, API modernization by jcflack · Pull Request #399 · tada/pljava
Tweak invocation.c so the stack-allocated space provided by the caller is used to save the prior state rather than to construct the new state. This way, the current state can have a fixed address (currentInvocation is a constant pointer) and can be covered by a single static ByteBuffer that Invocation.java can read/write through without relying on JNI methods. As Invocation isn't a JDBC-specific concept or class, it has never made much sense to have it in the .jdbc package. Move it to .internal.
Both values have just been stashed by stashCallContext. Both will be restored 14 lines later by _closeIteration. And nothing in those 14 lines cares about them.
After surveying the code for where function return values can
be constructed, add one switchToUpperContext() around the construction
of non-composite SRF return values, where it was missing, so such values
can be returned correctly after SPI_finish(), and so the former,
very hacky, cross-invocation retention of SPI contexts can be put
out to pasture.
For the record, these are the notes from that survey of the code:
Function results, non-set-returning:
Type_invoke:
the inherited _Type_invoke calls ->coerceObject, within sTUC
(sTUC = switchToUpperContext, below).
sub"class"es that override it:
Boolean,Byte,Double,Float,Integer,Long,Short,Void:
- overridden in order to use appropriately-typed JNI invoke method
- Double,Float,Long have _asDatum that does sTUC;
. historical artifact; those types were !byval before PG 8.4
- the rest do not sTUC; should be ok, all byval
Coerce: does sTUC
Composite: does sTUC around _getTupleAndClear
Arrays:
createArrayType (extern, in Array.c) does sTUC. So far so good.
What about !byval elements stored into the array?
the non-primitive/any types don't override _Array_coerceObject,
which is where Type_coerceObject is called on each element and
construct_md_array is called, with no sTUC. Around construct_md_array
is really where it's needed.
But then, _Array_coerceObject is still being called within sTUC
of _Type_invoke. All good.
Hmm: !byval elements of values[] are leaked when pfree(values) happens.
They should be pfree'd unconditionally; construct_md_array copies them.
What about UDTs?
They don't override _Type_invoke.
So they inherit the one that calls ->coerceObject, within sTUC.
That ought to be enough. UDT.c's coerceScalarObject itself also sTUCs,
inconsistently, for fixed-length and varlena types but not NUL-terminated.
That should be ok, and merely redundant. In coerceTupleObject, no sTUC
appears. Again, by inheritance of coerceObject, that should be ok.
Absent that, sTUC around the SQLOutputToTuple_getTuple should be adequate;
only if that could produce a tuple with TOAST pointers would it also be
necessary around the HeapTupleGetDatum.
Function results, set-returning:
_datumFromSRF is applied to each row result
The inherited _datumFromSRF calls Type_coerceObject, NOT within sTUC
XXX this, at least, definitely needs a sTUC added.
sub"class"es that override it:
only Composite: calls _getTupleAndClear, NOT within sTUC. But it
works out, just because TupleDesc.java's native _formTuple method uses
JavaMemoryContext. Spooky action at a distance?
Results from triggers:
Function.c's invokeTrigger does sTUC around the getTriggerReturnTuple.
In passing, fix a long-standing thinko in Invocation_popInvocation: the memory context that was current on entry is stored in upperContext of *this* Invocation, but popInvocation was 'restoring' the one that was saved in the *previous* Invocation. Also in passing, move the cleanEnqueuedInstances step later in the pop sequence, improving its chance of seeing instances that could become unreachable through the release of SPI contexts or the JNI local frame.
This can reveal issues with the nesting of SPI 'connections' or management of their associated memory contexts.
Without the special treatment, the instance of the Java class Invocation, if any, that corresponds to the C Invocation has its lifetime simply bounded by that of the C Invocation, rather than artificially extended across a sequence of SRF value-per-call invocations. This is simpler, does not break any existing tests, and is less likely to violate PostgreSQL's assumptions about correct behavior.
The commits merged into this branch simplify PL/Java's management of the PostgreSQL-to-PL/Java-function invocation stack, and especially simplify the handling of SPI (PostgreSQL's Server Programming Interface) and set-returning functions.

SPI includes "connect" and "finish" operations normally used in a simple pattern: connect before using SPI functions, finish when done and before returning to the caller, and, if anything allocated while "connected" is to be returned to the caller, be sure to allocate it in the "upper executor" memory context (that is, the context that was current before SPI_connect). PL/Java has long diverged from that approach, especially for set-returning functions using the value-per-call protocol (the only one PL/Java currently supports). If SPI was connected during one call in the sequence, PL/Java sought to save and reuse that connection and its memory contexts over later calls (where a simpler, "by the book" implementation would simply SPI_connect and SPI_finish within the individual calls as needed). It never seemed altogether clear that was a good idea, but at the same time there were no field reports of failure. It turns out, though, that it is not hard to construct tests showing the apparent success was all luck.

It has not been much trouble to reorganize that code so that SPI is used in the much simpler, by-the-book fashion. b2094ba fixes one place where a needed switchToUpperContext was missing but the error was masked by the former SPI juggling; with that fixed, all the tests in the CI script promptly passed, with SPI used in the purely nested way that it expects.

One other piece of complexity that has been removed is the handling of Java Invocation objects during set-returning functions.
Although the stack-allocated C invocation struct naturally lasts only through one actual call, PL/Java's SRF code took pains to keep its Java counterpart alive, as if the one instance represented the entire sequence of actual calls while returning a set. Eliminating that behavior has simplified the code and shown no adverse effect in the available tests. As these are changes of some significance that might possibly alter some behavior not tested here, they have not been made in the 1.6 or 1.5 branches. But the simplification seems to make a less brittle base for the development going forward on this branch.
CacheMap is a generic class useful for (possibly weak or soft) canonicalizing caches of things that are identified by one or more primitive values. (Writing the key values into a ByteBuffer avoids the allocation involved in boxing them; however, the API as it currently stands may be giving back that saving through the instantiation of lambdas. It should eventually be profiled, and possibly revised into a less tidy but more efficient form.) SwitchPointCache is intended for lazily caching numerous values of diverse types, groups of which can be associated with a single SwitchPoint for purposes of invalidation. As currently structured, the SwitchPoints (and their dependent GuardWithTest nodes) do not get stored in static final fields; this may limit HotSpot's ability to optimize them as fully as it could if they did.
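The ByteBuffer-keyed idea can be sketched in a few lines. This is a hypothetical miniature with invented names, not CacheMap's actual API, and it ignores the weak/soft-reference and concurrency aspects; it relies on ByteBuffer defining equals and hashCode over its remaining content, so a flipped buffer works directly as a map key:

```java
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical sketch: canonicalize by primitive key components written
// into a ByteBuffer, avoiding boxed key objects.
public class PrimitiveKeyCache<V>
{
    private final Map<ByteBuffer, V> map = new HashMap<>();

    public V canonicalize(int classId, int objId, Supplier<V> maker)
    {
        // ByteBuffer.equals/hashCode are defined over the remaining
        // content, so the flipped buffer serves directly as the key.
        ByteBuffer key = ByteBuffer.allocate(8).putInt(classId).putInt(objId);
        key.flip();
        return map.computeIfAbsent(key, k -> maker.get());
    }
}
```

Repeated calls with the same primitive key components return the same canonical instance, with no Integer boxing on the lookup path.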
Adapter is the abstract ancestor of all classes that implement PostgreSQL datatypes for PL/Java, and the adt.spi package contains classes that will be of use to datatype-implementing code: in particular, Datum. PostgreSQL datums are only exposed to Adapters, and the Adapter's job is to reliably convert between the PostgreSQL type and some appropriate Java representation. For some datatypes, there is a single or obvious appropriate Java representation, and an Adapter may be provided that simply produces that. For other datatypes, there may be no single obvious choice of Java representation, either because there is no good match or because there are several; an application might want to map types to specialized classes available in some domain-specific library. To serve those cases, Adapters can be defined in terms of Adapter.Contract subinterfaces, which are simply functional interfaces that document and expose the semantic components of the PostgreSQL type. For example, a contract for PostgreSQL INTERVAL would expose a 64-bit microseconds component, a 32-bit day count, and a 32-bit month count. The division of responsibility is that the Adapter encapsulates how to extract those components given a PostgreSQL datum, but the contract fixes the semantics of what the components are. It is then simple to use the Adapter, with any lambda that conforms to the contract, to produce any desired Java representation of the type. Dummy versions of Attribute, RegClass, RegType, TupleDescriptor, and TupleTableSlot break ground here on the model package, which will consist of a set of classes modeling key PostgreSQL abstractions and a useful subset of the PostgreSQL system catalogs. RegType also implements java.sql.SQLType, making it usable in (a suitable implementation of) JDBC to specify PostgreSQL types precisely. adt.spi.AbstractType needs the specialization() method that was earlier added to internal.Function in anticipation of needing it someday.
The org.postgresql.pljava.adt package contains 'contracts' (subinterfaces of Adapter.Contract.Scalar or Adapter.Contract.Array), which are functional interfaces that document and expose the exact semantic components of PostgreSQL data types. Adapters are responsible for the internal details of PostgreSQL's representation that aren't semantically important, and code that simply needs to construct some semantically faithful representation of the type only needs to be concerned with the contract.
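As a concrete illustration of the contract idea (using invented names, not the actual org.postgresql.pljava.adt interfaces), an INTERVAL-like contract exposing the three semantic components, and a lambda mapping them onto java.time.Duration, might look like:

```java
import java.time.Duration;

// Hedged sketch only: this interface and method are illustrative
// stand-ins for the Adapter.Contract idea, not PL/Java's real API.
public class IntervalContractDemo
{
    /** The semantic components of a PostgreSQL INTERVAL. */
    @FunctionalInterface
    public interface IntervalContract<T>
    {
        T construct(long microseconds, int days, int months);
    }

    // A real Adapter would extract the three components from the datum;
    // here they are simply passed through to the supplied lambda.
    public static <T> T mapInterval(
        long microseconds, int days, int months, IntervalContract<T> contract)
    {
        return contract.construct(microseconds, days, months);
    }
}
```

An application wanting a different representation (say, a domain library's own interval class) supplies a different lambda; the contract fixes what the components mean, and the adapter alone knows how to dig them out of the datum.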
CharsetEncoding is not really a catalog object (the available encodings in PostgreSQL are hardcoded) but is exposed here as a similar kind of object with useful operations, including encoding and decoding using the corresponding Java codec when known. CatalogObject is, of course, the superinterface of all things that really are catalog objects (identified by a classId, an objectId, and rarely a subId). This commit brings in RegNamespace and RegRole as needed for CatalogObject.Namespaced and CatalogObject.Owned. RolePrincipal is a bridge between a RegRole and Java's Principal interface. CatalogObject.Factory is a service interface 'used' by the API module, and will be 'provided' by the internals module to supply the implementations of these things.
And convert other code to use CharsetEncoding.SERVER_ENCODING where earlier hacks were used, like the implServerCharset() added to Session in 1.5.1. In passing, fix a bit of overlooked java7ification in SQLXMLImpl. The new CharsetEncodings example provides two functions: SELECT * FROM javatest.charsets(); returns a table of the available PostgreSQL encodings, and what Java encodings they could be matched up with. SELECT * FROM javatest.java_charsets(try_aliases); returns the table of all available Java charsets and the PostgreSQL ones they could be matched up with, where the boolean try_aliases indicates whether to try Java's known aliases for a charset when nothing in PostgreSQL matched its canonical name. False matches happen when try_aliases is true, so that's not a great idea.
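The matching the example functions perform can be approximated in plain Java. This sketch (class and method names invented) checks an encoding name first against each Java charset's canonical name, then, optionally, against its known aliases, which is where the false matches can creep in:

```java
import java.nio.charset.Charset;

// Sketch of canonical-name-then-aliases matching, as in the
// javatest.java_charsets(try_aliases) example; names here are invented.
public class CharsetMatch
{
    public static String match(String pgName, boolean tryAliases)
    {
        for ( Charset cs : Charset.availableCharsets().values() )
        {
            if ( cs.name().equalsIgnoreCase(pgName) )
                return cs.name();
            if ( tryAliases  &&  cs.aliases().stream()
                    .anyMatch(a -> a.equalsIgnoreCase(pgName)) )
                return cs.name();
        }
        return null; // nothing matched
    }
}
```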
These PostgreSQL notions will have to be available to Java code for two reasons. First, even code that has no business poking at them can still need to know which one is current, to set an appropriate lifetime on a Java object that corresponds to something in PostgreSQL allocated in that context or registered to that owner. For that purpose, they both will be exposed as subtypes of Lifespan, and the existing PL/Java DualState class will be reworked to accept any Lifespan to bound the validity of the native state. Second, Adapter code could very well need to poke at such objects (MemoryContexts, anyway): either to make a selected one current for when allocating some object, or even to create and manage one. Methods for that will not be exposed on MemoryContext or ResourceOwner proper, but could be protected methods of Adapter, so that only an Adapter can use them.
In addition to MemoryContextImpl and ResourceOwnerImpl proper, this step will require reworking DualState so state lives are bounded by Lifespan instances instead of arbitrary pointer values. Invocation will be made into yet another subtype of Lifespan, appropriate for the life of an object passed by PostgreSQL in a call and presumed good while the call is in progress. The DualState change will have to be rototilled through all of its clients. That will take the next several commits. The DualState.Key requirement that was introduced in 1.5.1 as a way to force DualState-guarded objects to be constructed only in upcalls from C (as a hedge against Java code inadvertently doing it on the wrong thread) will go away. We *want* Adapters to be able to easily construct things without leaving Java. Just don't do it on the wrong thread.
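The Lifespan idea reduces to: clients enroll with a lifespan, and when it expires, their native state is released. A toy sketch, with invented names and none of DualState's real machinery (threading discipline, reference queues, explicit-release paths):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy sketch of the Lifespan/DualState relationship; not the real API.
public class LifespanDemo
{
    /** A client whose native state is bounded by some lifespan. */
    @FunctionalInterface
    public interface Guarded
    {
        void nativeStateReleased();
    }

    public static class Lifespan
    {
        private final Deque<Guarded> clients = new ArrayDeque<>();
        private boolean expired = false;

        public void adopt(Guarded g)
        {
            if ( expired )
                throw new IllegalStateException("lifespan has expired");
            clients.push(g);
        }

        public void expire()
        {
            expired = true;
            while ( ! clients.isEmpty() )
                clients.pop().nativeStateReleased();
        }
    }
}
```

In these terms, a MemoryContext, a ResourceOwner, or an Invocation would each be a Lifespan subtype whose expire() is driven by the corresponding PostgreSQL event (context reset/delete, owner release, call return).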
The current invocation can be the right Lifespan to specify for a DualState that's guarding some object PostgreSQL passed in to the call, which is expected to be good for as long as the call is in progress. In other, but related, news, Invocation can now return the "upper executor" memory context: that is, whatever context was current at entry, even if a later use of SPI changes the context that is current. It can appear tempting to eliminate the special treatment of PgSavepoint in Invocation, and simply make it another DualState client, but because of the strict nesting imposed on savepoints, keeping just the one reference to the first one set suffices, and is more efficient.
Simplify these: their C callers were passing unconditional null as the ResourceOwner before, which their Java constructors passed along unchanged. Now just have the Java constructor pass null as the Lifespan.
These DualState clients were previously passing the address of the current invocation struct as their "resource owner", again from the C code, passed along by the Java constructor. Again simplify to call Invocation.current() right in the Java constructor and use that as the Lifespan. On a side note, the legacy Relation class included here (and its legacy Tuple and TupleDesc) will naturally be among the first candidates for retirement when this new model API is ready.
This legacy Portal class is called from C and passed the address of the PostgreSQL ResourceOwner associated with the Portal itself.
This is only an intermediate refactoring of VarlenaWrapper. Construction of one is still set in motion from C. Ultimately, it should implement Datum and be something that a Datum.Accessor can construct with a minimum of fuss.
Originally a hedge against coding mistakes during the introduction of DualState for 1.5.1 (which had to support Java < 9), the cookie is less necessary now that the internals are behind JPMS encapsulation, and the former checks for the cookie can be replaced with assertions that the action is happening on the right thread. The CI tests run with assertions enabled, so this should be adequate.
The commits grouped under this merge add API to expose in Java the PostgreSQL notions of MemoryContext and ResourceOwner, and then rework PL/Java's DualState class (which manages objects that combine some Java state and some native state, and may need specified actions to occur if the Java state becomes unreachable or explicitly released or if a lifespan bounding the native state expires). A DualState now accepts a Lifespan, of which MemoryContext and ResourceOwner are both subtypes. So is Invocation, an obvious lifespan for things PostgreSQL passes in that are expected to be valid for the duration of the call. The remaining commits in this group propagate the changes through the affected legacy code.
Fitting it into the new scheme is not entirely completed here; for example, newReadable takes a Datum.Input parameter, but still casts it internally to VarlenaWrapper.Input. Making it interoperate with any Datum.Input may be a bit more work. Likewise, newReadable with synthetic=true still encapsulates all the knowledge of what datatypes there is synthetic-XML coverage for and selecting the right VarlenaXMLRenderer for it (there's that varlena-specificity again!). More of that should be moved out of here and into an Adapter. In passing, fix a couple typos in toString() methods, and add a serviceable, if brute-force, getString() method to Synthetic. It would be better for SyntheticXMLReader to gain the ability to produce character-stream output efficiently, but until that happens, there needs to be something for those moments when you just want a string to look at and shouldn't have to fuss to get it. For now, VarlenaWrapper.Input and .Stream still extend, and add small features like toString(Object) to, DatumImpl. Later work can probably migrate those bits so VarlenaWrapper will only contain logic specific to varlenas. An adt.spi interface Verifier is added, though Datum doesn't yet expose any way to use it; in this commit, only one method accepting Verifier.OfStream is added in DatumImpl.Input.Stream, the minimal change needed to get things working.
As before, JNI methods for this 'model' framework continue to be grouped together in ModelUtils.c; their total number and complexity is expected to be low enough for that to be practical, and then they can all be seen in one place. RegClassImpl and RegTypeImpl acquire m_tupDescHolder arrays in this commit, without much explanation; that will come a few commits later.
There are two flavors so far, Deformed and Heap. Deformed works with whatever a real PostgreSQL TupleTableSlot can work with, relying on the PostgreSQL implementation to 'deform' it into separate datum and isnull arrays. (That doesn't have to be a PostgreSQL 'virtual' TupleTableSlot; it can do the deforming independently of the type of slot. When the time comes to implement the reverse direction and produce tuples, a virtual slot will be the way to go for that, using the PostgreSQL C code to 'form' it once populated.) The Heap flavor knows enough about that PostgreSQL tuple format to 'deform' it in Java without the JNI calls (except where some out-of-line value has to be mapped, or for varlena values until VarlenaWrapper sheds more of its remaining JNI-centricity). The Heap implementation does not yet do anything clever to memoize the offsets into the tuple, which makes the retrieval of all the tuple's values an O(n^2) proposition; there is a low-hanging-fruit optimization opportunity there. For now, it gets the job done. It might be interesting to see how the two flavors compare on typical heap tuples: Deformed, making more JNI calls but relying on PostgreSQL's fast native deforming, or Heap, which can avoid more JNI calls, and also avoids deforming something into a fresh native memory allocation if the only thing it will be used for is to immediately construct some Java object. The Heap flavor can do one thing the Deformed flavor definitely cannot: it can operate on heap-tuple-formatted contents of an arbitrary Java byte buffer, which in theory might not even be backed by native memory. (Again, for now, this is slightly science fiction where varlena values are concerned, because VarlenaWrapper retains a lot of its native dependencies. A ByteBuffer "heap tuple" with varlenas in it will have to be native-backed for now.) 
The selection of the DualState guard by heapTupleGetLightSlot() is currently more hardcoded than that would suggest; it assumes the buffer is mapping memory that can be heap_freetuple'd. The 'light' in heapTupleGetLightSlot really means that there isn't an underlying PostgreSQL TupleTableSlot constructed. The whole business of how to apply and use DualState guards on these things still needs more attention. There is also Heap.Indexed, which is the thing needed for arrays. When the element type is fixed-length, it achieves O(1) access (plus null-bitmap processing if there are nulls). It uses a "count preceding null bits ahead of time" strategy that could also easily be adopted in Heap. A NullableDatum flavor is also needed, which would be the thing for mapping (as one prominent example) function-call arguments. The HeapTuples8 and HeapTuples4 classes at the end are scaffolding and ought to be factored out into something with a decent API, as hinted at in the comment preceding them. A Heap instance still inherits the values/nulls array fields used in the deformed case, without (at present) making any use of them. It is possible some use could be made (as, again, an underlying PG TupleTableSlot could be used in deforming a heap tuple), but it's also possible that won't ever be needed, and the class could be refactored to a simpler form.
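The null-bitmap arithmetic behind that is simple to state in code. This sketch assumes an LSB-first bitmap with a set bit meaning the value is present (as in PostgreSQL's array null bitmap); with fixed-length elements, storedBefore(i) times the element length gives element i's offset:

```java
// Sketch of "count preceding null bits" for O(1)-ish indexed access,
// assuming an LSB-first bitmap where a set bit means a value is stored.
public class NullBitmap
{
    /*
     * Number of values actually stored before index i; with fixed-length
     * elements, storedBefore(i) * elementLength is element i's offset.
     */
    public static int storedBefore(byte[] bitmap, int i)
    {
        int count = 0;
        for ( int b = 0 ; b < i / 8 ; ++ b )      // whole bytes first
            count += Integer.bitCount(bitmap[b] & 0xFF);
        if ( 0 != i % 8 )                          // then the partial byte
            count += Integer.bitCount(
                (bitmap[i / 8] & 0xFF) & ((1 << (i % 8)) - 1));
        return count;
    }

    /** A zero bit means the value at index i is null. */
    public static boolean isNull(byte[] bitmap, int i)
    {
        return 0 == (bitmap[i / 8] & (1 << (i % 8)));
    }
}
```

Precomputing cumulative counts per bitmap byte (the "ahead of time" part) would turn the per-access loop into a single table lookup plus one partial-byte popcount.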
Here's how this is going to work. The "exists because mentioned" aspect of a CatalogObject is a lightweight operation, just caching/returning a singleton with the mentioned values of classId/objId/(subId?). For a bare CatalogObject (objId unaccompanied by classId), that's all there is. But for any CatalogObject.Addressed subtype, the classId and objId together identify a tuple in a particular system catalog (or, that is, identify a tuple that could exist in that catalog). And the methods on the Java class that return information about the object get the information by fetching attributes from that tuple, then constructing whatever the Java representation will be. To avoid duplicating the work of fetching (the tuple itself, and then an attribute from it) and constructing the Java result, an instance will have an array of SwitchPointCache-managed "slots" that will cache, lazily, the constructed results. Five of those slots have their indices standardized right here in CatalogObjectImpl, to account for the name, namespace, owner, and ACL of objects that have those things. Slot 0 is for the tuple itself. When an uncached value is requested, the "computation method" set up for that slot will execute (always on the PG thread, so it can interact with PostgreSQL with no extra ceremony). Most computation methods will begin by calling cacheTuple() to obtain the tuple itself from slot 0, and then will fetch the wanted attribute from it and construct the result. The computation method for cacheTuple(), in turn, will obtain the tuple if that hasn't happened yet, usually from the PostgreSQL syscache. We copy it to a long-lived memory context where we can keep it until its invalidation. The most common way the cacheTuple is fetched is by a one-argument syscache search by the object's Oid. When that is all that is needed, the Java class need only implement cacheId() to return the number of the PostgreSQL syscache to search in.
For exceptional cases (attributes, for example, require a two-argument syscache search), a class should just provide its own cacheTuple computation method. The slots for an object are associated with a Java SwitchPoint, and the mapping from the object to its associated SwitchPoint is a function supplied to the SwitchPointCache.Builder. Some classes, such as RegClass and RegType, will allocate a SwitchPoint per object, and can be selectively invalidated. Otherwise, by default, the s_globalPoint declared here can be used, which will invalidate all values of all slots depending on it.
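The invalidation mechanism underneath all of this is java.lang.invoke.SwitchPoint itself, which can be seen in isolation: a guardWithTest handle keeps returning the cached constant until the SwitchPoint is invalidated, after which it falls through to the recomputation target. (This miniature caches one value directly; SwitchPointCache manages arrays of such slots per instance.)

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.invoke.SwitchPoint;

// Miniature of the SwitchPoint-guarded caching idea; not the real
// SwitchPointCache, which wires one such guard per slot.
public class SwitchPointDemo
{
    static String recompute() { return "recomputed"; }

    private final SwitchPoint sp = new SwitchPoint();
    private final MethodHandle slot;

    public SwitchPointDemo()
    {
        try
        {
            MethodHandle target = MethodHandles.lookup().findStatic(
                SwitchPointDemo.class, "recompute",
                MethodType.methodType(String.class));
            // Valid -> constant "cached"; invalidated -> recompute().
            slot = sp.guardWithTest(
                MethodHandles.constant(String.class, "cached"), target);
        }
        catch ( ReflectiveOperationException e )
        {
            throw new RuntimeException(e);
        }
    }

    public String read()
    {
        try
        {
            return (String)slot.invokeExact();
        }
        catch ( Throwable t )
        {
            throw new RuntimeException(t);
        }
    }

    public void invalidate()
    {
        SwitchPoint.invalidateAll(new SwitchPoint[] { sp });
    }
}
```

HotSpot can optimize the valid-side path nearly to nothing, which is why the question of whether the SwitchPoints live in static final fields (see the CacheMap/SwitchPointCache commit above) matters for performance.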
They are the two CatalogObjects with tupleDescriptor() methods. You can get strictly more tuple descriptors by asking RegType; a RegType.Blessed can give you a tuple descriptor that has been interned in the PostgreSQL typcache and corresponds to nothing in the system catalogs. But whenever a RegType t is an ordinary cataloged composite type or the row type of a cataloged relation, then there is a RegClass c such that c == t.relation() and t == c.type(), and you will get the same tuple descriptor from the tupleDescriptor() method of either c or t. In all but one such case, c delegates to c.type().tupleDescriptor() and lets the RegType do the work, obtaining the descriptor from the PG typcache. The one exception is when the tuple descriptor for pg_class itself is wanted, in which case the RegClass does the work, obtaining the descriptor from the PG relcache, and RegType delegates to it for that one exceptional case. The reason is that RegClass will see the first request for the pg_class tuple descriptor, and before that is available, c.type() can't be evaluated. In either case, whichever class looked it up, a cataloged tuple descriptor is always stored on the RegClass instance, and RegClass will be responsible for its invalidation if the relation is altered. (A RegType.Blessed has its own field for its tuple descriptor, because there is no corresponding RegClass for one of those.) Because of this close connection between RegClass and RegType, the methods RegClass.type() and RegType.relation() use a handshake protocol to ensure that, whenever either method is called, not only does it cache the result, but its counterpart for that result instance caches the reverse result, so the connection can later be traversed in either direction with no need for a lookup by oid. 
In the static initializer pattern introduced here, the handful of SwitchPointCache slots that are predefined in CatalogObject.Addressed are added to, by starting an int index at Addressed.NSLOTS, incrementing it to initialize additional slot index constants, then using its final value to define a new NSLOTS that shadows the original.
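That pattern, with stand-in class names rather than the real CatalogObjectImpl hierarchy, looks like:

```java
// Sketch of the slot-index static-initializer pattern; the class and
// field names are illustrative stand-ins.
public class SlotIndexDemo
{
    public static class Addressed
    {
        public static final int SLOT_TUPLE     = 0;
        public static final int SLOT_NAME      = 1;
        public static final int SLOT_NAMESPACE = 2;
        public static final int SLOT_OWNER     = 3;
        public static final int SLOT_ACL       = 4;
        public static final int NSLOTS         = 5;
    }

    public static class RegClassLike extends Addressed
    {
        public static final int SLOT_TUPLEDESCRIPTOR;
        public static final int NSLOTS;   // shadows Addressed.NSLOTS
        static
        {
            int i = Addressed.NSLOTS;     // continue past inherited slots
            SLOT_TUPLEDESCRIPTOR = i++;
            NSLOTS = i;                   // this class's total slot count
        }
    }
}
```

Each subclass sizes its slot array with its own NSLOTS, and the inherited computation methods keep working because the inherited slot indices are unchanged.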
An Attribute is most often obtained from a TupleDescriptor (in this API, that's how it's done), and the TupleDescriptor can supply a version of Attribute's tuple directly; no need to look it up anywhere else. That copy, however, cuts off at ATTRIBUTE_FIXED_PART_SIZE bytes. The most commonly needed attributes of Attribute are found there, but for others beyond that cutoff, the full tuple has to be fetched from the syscache. So AttributeImpl has the normal SLOT_TUPLE slot, used for the rarely-needed full tuple, and also its own SLOT_PARTIALTUPLE, for the truncated version obtained from the containing tuple descriptor. Most computation methods will fetch from the partial one, with the full one referred to only by the ones that need it. It doesn't end there. A few critical Attribute properties, byValue, alignment, length, and type/typmod, are needed to successfully fetch values from a TupleTableSlotImpl.Heap. So Attribute cannot use that API to fetch those values. For those, it must hardcode their actual offsets and sizes in the raw ByteBuffer that the containing tuple descriptor supplies, and fetch them directly. So there is also a SLOT_RAWBUFFER. This may sound more costly in space than it is. The raw buffer, of course, is just a ByteBuffer sliced off and sharing the larger one in the TupleDescriptor, and the partial tuple is just a TupleTableSlot instance built over that. The full tuple is another complete copy, but only fetched when those less-commonly-needed attributes are requested. With those key values obtained from the raw buffer, the Attribute's name does not require any such contortions, and can be fetched using the civilized TupleTableSlot API, except it can't be done by name, so the attribute number is used for that one. An AttributeImpl.Transient holds a direct reference to the TupleDescriptor it came from, which its containingTupleDescriptor() method returns. 
An AttributeImpl.Cataloged does not, and instead holds a reference to the RegClass for which it is defined in the system catalogs, and containingTupleDescriptor() delegates to tupleDescriptor() on that. If the relation has been altered, that could return an updated new tuple descriptor.
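The raw-buffer fetch described above amounts to absolute reads at known offsets into the slice the tuple descriptor supplies. In this sketch the offsets are invented placeholders (the real ones depend on the platform's pg_attribute layout, and PL/Java must determine them for the running server):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Illustration of the direct-fetch idea only: OFF_* values here are
// invented for the sketch, not real pg_attribute offsets.
public class RawAttributeDemo
{
    static final int OFF_ATTLEN   = 0; // hypothetical int16 field
    static final int OFF_ATTBYVAL = 2; // hypothetical bool field

    public static short attlen(ByteBuffer raw)
    {
        return raw.order(ByteOrder.nativeOrder()).getShort(OFF_ATTLEN);
    }

    public static boolean attbyval(ByteBuffer raw)
    {
        return 0 != raw.get(OFF_ATTBYVAL);
    }
}
```

Absolute-index gets leave the buffer's position untouched, so many such reads can share one slice without any repositioning ceremony.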
RegClass is an easy choice, because those invalidations are also the invalidations of TupleDescriptors, and because it has a nice API; we are passed the oid of the relation to invalidate, so we acquire the target in O(1). (Note in passing: AttributeImpl is built on SwitchPointCache in the pattern that's emerged for CatalogObjects in general, and an AttributeImpl.Cataloged uses the SwitchPoint of the RegClass, so it's clear that all the attributes of the associated tuple descriptor will do the right thing upon invalidation. In contrast, TupleDescriptorImpl itself isn't quite built that way, and the question of just how a TupleDescriptor itself should act after invalidation hasn't been fully nailed down yet.) RegType is probably also worth invalidating selectively, as is probably RegProcedure (procedures are mainly what we're about in PL/Java, right?), though only RegType is done here. That API is less convenient; we are passed not the oid but a hash of the oid, and not the hash that Java uses. The solution here is brute force, to get an initial working implementation. There are plenty of opportunities for optimization. One idea would be to use a subclass of SwitchPoint that would set a flag, or invoke a Runnable, the first time its guardWithTest method is called. If that hasn't happened, there is nothing to invalidate. The Runnable could add the containing object into some data structure more easily searched by the supplied hash. Transitions of the data structure between empty and not-empty could be propagated to a boolean in native memory, where the C callback code could avoid the Java upcall entirely if there is nothing to do. This commit contains none of those optimizations. Factory.invalidateType might be misnamed; it could be syscacheInvalidate and take the syscache id as another parameter, and then dispatch to invalidating a RegType or RegProcedure or what have you, as the case may be.
At least, that would be a more concise implementation than providing separate Java methods and having the C callback decide which to call. But if some later optimization is tracking anything-to-invalidate? separately for them, then the C code might be the efficient place for the check to be done. PostgreSQL has a limited number of slots for invalidation callbacks, and requires a separate registration (using another slot) for each syscache id for which callbacks are wanted (even though you get the affected syscache id in the callback?!). It would be antisocial to grab one for every sort of CatalogObject supported here, so we will have many relying on CatalogObject.Addressed.s_globalPoint and some strategy for zapping that every so often. That is not included in this commit. (The globalPoint exists, but there is not yet anything that ever zaps it.) Some imperfect strategy that isn't guaranteed conservative might be necessary, and might be tolerable (PL/Java has existed for years with less attention to invalidation). An early idea was to zap the globalPoint on every transaction or subtransaction boundary, or when the command counter has been incremented; those are times when PostgreSQL processes invalidations. However, invalidations are also processed any time locks are acquired, and that doesn't sound as if it would be practical to intercept (or as if the resulting behavior would be practical, even if it could be done). Another solution approach would just be to expose a zapGlobalPoint knob as API; if some code wants to be sure it is not seeing something stale (in any CatalogObject we aren't doing selective invalidation for), it can just say so before fetching it.