public abstract class CacheLoadOnlyStoreAdapter<K,V,I> extends Object implements CacheStore<K,V>

Type Parameters:
K - Key type.
V - Value type.
I - Input type.
This class processes input data in the following way:

- An iterator over raw input records is obtained from the user-defined inputIterator(Object...).
- The iterator is traversed and records are collected into batches of batchSize.
- Batches are submitted to an internal pool of threadsCnt working threads.
- Each record in a batch is transformed with the parse(Object, Object...) method and the result is stored into the cache.
Two methods should be implemented by subclasses (a minimal sketch follows this list):

- inputIterator(Object...). It should open the underlying data source and iterate over all records available in it. Individual records can be in a very raw form, like text lines of a CSV file.
- parse(Object, Object...). This method should process input records and transform them into key-value pairs for the cache.
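As an illustration, here is a minimal sketch of such a subclass. The class name CsvLoadOnlyStore, the "key,value" line format, and the convention of passing the file path as the first loadCache argument are assumptions for this example, not part of the API.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Iterator;

import javax.cache.integration.CacheLoaderException;

import org.apache.ignite.cache.store.CacheLoadOnlyStoreAdapter;
import org.apache.ignite.lang.IgniteBiTuple;
import org.jetbrains.annotations.Nullable;

/** Hypothetical store that loads "key,value" lines from a CSV file passed as the first loadCache argument. */
public class CsvLoadOnlyStore extends CacheLoadOnlyStoreAdapter<Integer, String, String> {
    /** Opens the file and iterates its lines as raw records. */
    @Override protected Iterator<String> inputIterator(@Nullable Object... args) throws CacheLoaderException {
        if (args == null || args.length == 0)
            throw new CacheLoaderException("Expected CSV file path as the first argument.");

        try {
            return Files.readAllLines(Paths.get(args[0].toString())).iterator();
        }
        catch (IOException e) {
            throw new CacheLoaderException("Failed to read input file: " + args[0], e);
        }
    }

    /** Parses a "key,value" line into a cache entry; malformed lines are skipped. */
    @Nullable @Override protected IgniteBiTuple<Integer, String> parse(String rec, @Nullable Object... args) {
        String[] parts = rec.split(",", 2);

        if (parts.length != 2)
            return null; // Returning null skips the record.

        try {
            return new IgniteBiTuple<>(Integer.parseInt(parts[0].trim()), parts[1].trim());
        }
        catch (NumberFormatException ignored) {
            return null; // Non-numeric key: skip the record as well.
        }
    }
}
```

Only these two methods need to be supplied; batching, threading, and storing the parsed pairs into the cache are handled by the adapter.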
Modifier and Type | Field and Description |
---|---|
static int | DFLT_BATCH_QUEUE_SIZE - Default batch queue size (max batches count to limit memory usage). |
static int | DFLT_BATCH_SIZE - Default batch size (number of records read with inputIterator(Object...) and then submitted to internal pool at a time). |
static int | DFLT_THREADS_COUNT - Default number of working threads (equal to the number of available processors). |
Constructor and Description |
---|
CacheLoadOnlyStoreAdapter() |
Modifier and Type | Method and Description |
---|---|
void | delete(Object key) |
void | deleteAll(Collection<?> keys) |
int | getBatchQueueSize() - Returns batch queue size. |
int | getBatchSize() - Returns batch size. |
int | getThreadsCount() - Returns number of worker threads. |
protected abstract Iterator<I> | inputIterator(Object... args) - Returns iterator of input records. |
V | load(K key) |
Map<K,V> | loadAll(Iterable<? extends K> keys) |
void | loadCache(IgniteBiInClosure<K,V> c, Object... args) - Loads all values from underlying persistent storage. |
protected abstract @Nullable IgniteBiTuple<K,V> | parse(I rec, Object... args) - This method should transform raw data records into valid key-value pairs to be stored into cache. |
void | sessionEnd(boolean commit) - Tells store to commit or rollback a transaction depending on the value of the 'commit' parameter. |
void | setBatchQueueSize(int batchQueueSize) - Sets batch queue size. |
void | setBatchSize(int batchSize) - Sets batch size. |
void | setThreadsCount(int threadsCnt) - Sets number of worker threads. |
void | write(javax.cache.Cache.Entry<? extends K,? extends V> entry) |
void | writeAll(Collection<javax.cache.Cache.Entry<? extends K,? extends V>> entries) |
public static final int DFLT_BATCH_SIZE
Default batch size (number of records read with inputIterator(Object...) and then submitted to internal pool at a time).

public static final int DFLT_BATCH_QUEUE_SIZE
Default batch queue size (max batches count to limit memory usage).

public static final int DFLT_THREADS_COUNT
Default number of working threads (equal to the number of available processors).
protected abstract Iterator<I> inputIterator(@Nullable Object... args) throws javax.cache.integration.CacheLoaderException
Returns iterator of input records. Note that the returned iterator doesn't have to be thread-safe. Thus it could operate on raw streams, DB connections, etc. without additional synchronization.
Parameters:
args - Arguments passed into the IgniteCache.loadCache(IgniteBiPredicate, Object...) method.
Throws:
javax.cache.integration.CacheLoaderException - If the iterator can't be created with the given arguments.

@Nullable protected abstract IgniteBiTuple<K,V> parse(I rec, @Nullable Object... args)
This method should transform raw data records into valid key-value pairs to be stored into cache. If null is returned, the record is simply skipped.
Parameters:
rec - A raw data record.
args - Arguments passed into the IgniteCache.loadCache(IgniteBiPredicate, Object...) method.
Returns:
Key-value pair to be stored in the cache, or null if no entry could be produced from this record.

public void loadCache(IgniteBiInClosure<K,V> c, @Nullable Object... args)
Loads all values from underlying persistent storage. This method is called whenever the IgniteCache.loadCache(IgniteBiPredicate, Object...) method is invoked, which is usually done to preload the cache from persistent storage.
This method is optional, and the cache implementation does not depend on this method to do anything. The default implementation of this method in CacheStoreAdapter does nothing.
For every loaded value, the IgniteBiInClosure.apply(Object, Object) method should be called on the passed-in closure. The closure will then make sure that the loaded value is stored in the cache.
Specified by:
loadCache in interface CacheStore<K,V>
Parameters:
c - Closure for loaded values.
args - Arguments passed into the IgniteCache.loadCache(IgniteBiPredicate, Object...) method.
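Preloading a cache backed by this adapter could look like the following sketch. The cache name "csvCache", the file "data.csv", the default Ignition.start() call, and the CsvLoadOnlyStore class from the earlier sketch are assumptions for illustration; the adapter's loadCache(...) takes care of iterating, parsing, batching, and invoking the closure internally.

```java
import javax.cache.configuration.FactoryBuilder;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class LoadCacheExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("csvCache");

            // Register the load-only store; FactoryBuilder needs a no-arg constructor.
            ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(CsvLoadOnlyStore.class));

            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(ccfg);

            // Arguments are forwarded to inputIterator(...) and parse(...).
            cache.loadCache(null, "data.csv");

            System.out.println("Loaded entries: " + cache.size());
        }
    }
}
```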
public int getBatchSize()
Returns batch size.

public void setBatchSize(int batchSize)
Sets batch size.
Parameters:
batchSize - Batch size.

public int getBatchQueueSize()
Returns batch queue size.

public void setBatchQueueSize(int batchQueueSize)
Sets batch queue size.
Parameters:
batchQueueSize - Batch queue size.

public int getThreadsCount()
Returns number of worker threads.

public void setThreadsCount(int threadsCnt)
Sets number of worker threads.
Parameters:
threadsCnt - Number of worker threads.
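These knobs are typically set when the store instance is created. Below is a minimal sketch, assuming a hypothetical TunedCsvStoreFactory and illustrative values, of a store factory that tunes the adapter; it could replace FactoryBuilder.factoryOf(...) in the earlier preload sketch.

```java
import javax.cache.configuration.Factory;

/** Hypothetical factory that tunes the adapter before handing it to the cache configuration. */
public class TunedCsvStoreFactory implements Factory<CsvLoadOnlyStore> {
    @Override public CsvLoadOnlyStore create() {
        CsvLoadOnlyStore store = new CsvLoadOnlyStore();

        store.setBatchSize(1_000);   // Records read from inputIterator(...) per submitted batch.
        store.setBatchQueueSize(8);  // Max number of pending batches, limiting memory usage.
        store.setThreadsCount(4);    // Worker threads that parse batches and feed the cache.

        return store;
    }
}
```

The cache configuration would then use ccfg.setCacheStoreFactory(new TunedCsvStoreFactory()).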
public void writeAll(Collection<javax.cache.Cache.Entry<? extends K,? extends V>> entries)
public void delete(Object key)
public void deleteAll(Collection<?> keys)
public void sessionEnd(boolean commit)
Tells store to commit or rollback a transaction depending on the value of the 'commit' parameter.
Specified by:
sessionEnd in interface CacheStore<K,V>
Parameters:
commit - True if transaction should commit, false for rollback.