The purpose of the IDataCacheProvider is to provide caching functionality to the application: it reduces the workload of the garbage collector by re-using objects, and it decouples the amount of data the application can load from the actual size of the Java heap. The primary target of the implementation is arrays of primitive values (byte, short, int, long, float, double). Specialized access methods handle these primitive types, while a set of generic methods allows concrete cache implementations to provide the same functionality for arbitrary classes.

The cache functionality is accessed through the DataCacheProviderFactory, which returns a singleton of an implementation registered via the "com.agfa.pacs.core.shared.DataCacheProvider" extension point.
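
A typical interaction might look roughly like the following sketch. Only DataCacheProviderFactory and IDataCacheProvider are named in this document; the getInstance() accessor and the array access method shown are assumptions for illustration, not the actual API.

    // Hypothetical usage sketch; getInstance() and getIntArray() are assumed names.
    IDataCacheProvider cache = DataCacheProviderFactory.getInstance();

    // Specialized access for primitives: obtain a pooled int[] instead of
    // allocating a fresh array and leaving its reclamation to the GC.
    // The generic methods would be used analogously for arbitrary classes.
    int[] pixelBuffer = cache.getIntArray(512 * 512);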

Functionality

IDataCacheProvider provides three different types of functionality accessible through corresponding sets of methods:

Item Types and Priorities

Each item put under control of the cache can be associated with a specific priority (0..MAX_ITEM_PRIORITY) which influences the probability of the item being re-used by the cache as part of its pool functionality. Priority 0 results in early re-use, while MAX_ITEM_PRIORITY leads to a longer lifetime. Items with the same priority should be re-used following an LRU (least recently used) scheme.
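
Continuing the sketch above, a caller might request short-lived scratch buffers with priority 0 and long-lived data with MAX_ITEM_PRIORITY; the allocation methods taking a priority argument, and MAX_ITEM_PRIORITY being a constant on IDataCacheProvider, are assumptions for illustration.

    // Hypothetical priority usage; method names and the constant's location are assumed.
    short[] scratch = cache.getShortArray(64 * 1024, 0);  // priority 0: may be re-used early
    float[] calibration = cache.getFloatArray(4096, IDataCacheProvider.MAX_ITEM_PRIORITY);  // kept alive longer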

Each personalized or persistent data item within the cache is identified by a unique CacheID. The CacheID should be created using the createID() method for session-persistent or personalized objects, or createID(group, item, persistenceType) for objects which are intended to persist across application restarts. The group/item pair has to identify a specific item across multiple runs of the application; this is vital so the object can be accessed again under the same group/item during the next run. The three persistence types are:

Heap Management

To support the Java heap management and the internal memory management of the cache, an additional mechanism is provided through the MemoryAlertHandler which allows arbitrary parts of the application to participate in handling memory shortages. The MemoryAlertHandler allows registering IMemoryAlertListeners, which are notified when memory is running low and heap space has to be freed so that the cache can perform a requested allocation operation.

A listener registered with addMemoryAlertListener is notified if an allocation fails, or is expected to fail, due to low heap memory; it should try to free internally used memory so that the allocation can be completed. A listener registered with addPersistentMemoryAlertListener is notified whenever disk space is running low and persistent data has to be removed from disk. In case no disk is used for caching and all persistent data remains in the heap, the persistent memory alert listeners are also notified if the heap runs low and the memory alert listeners were not able to free sufficient memory.
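
A registration might look roughly like the following. MemoryAlertHandler, IMemoryAlertListener, addMemoryAlertListener and addPersistentMemoryAlertListener are named above; the getInstance() accessor, the single-method (lambda-friendly) listener shape, and the helper fields/methods are assumptions for illustration.

    // Hypothetical sketch; the listener callback shape and helpers are assumed.
    MemoryAlertHandler handler = MemoryAlertHandler.getInstance();

    // Heap alert: free internally held, recomputable data so the pending
    // cache allocation can complete.
    handler.addMemoryAlertListener(alert -> thumbnailCache.clear());

    // Persistent alert: disk space for persistent items is running low (or,
    // without a disk cache, heap listeners could not free enough memory).
    handler.addPersistentMemoryAlertListener(alert -> purgeOldPersistentItems());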