A low memory overhead read-through cache implemented using generations.
`GenerationalReadThroughCache<TKey, TValue>` is a read-through cache used for caching items read from an underlying data source.
`GenerationalReadThroughCache<TKey, TValue>` implements the `IReadThroughCache<TKey, TValue>` interface, which has the following methods:
| Method | Description |
|---|---|
| `Maybe<TValue> Get(TKey key)` | tries to get the value for a key |
| `Task<Maybe<TValue>> GetAsync(TKey key)` | tries to get the value for a key, asynchronously |
| `Maybe<TValue>[] GetBatch(IReadOnlyCollection<TKey> keys)` | tries to get the values associated with some keys |
| `Task<Maybe<TValue>[]> GetBatchAsync(IReadOnlyCollection<TKey> keys)` | tries to get the values associated with some keys, asynchronously |
| `void Invalidate(TKey key)` | evicts a specific key from the cache |
| `void Invalidate(IReadOnlyCollection<TKey> keys)` | evicts some keys from the cache |
The return type `Maybe<TValue>` is a struct that either contains a value or does not, much like `Nullable<T>`, but it works for both struct and class types.
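To illustrate the idea, a Maybe-style struct can be sketched as follows. This is an illustrative assumption, not the library's actual `Maybe<TValue>` implementation:

```csharp
using System;

// Illustrative sketch only; not the library's actual Maybe<TValue>.
public readonly struct Maybe<T>
{
    private readonly T _value;

    public bool HasValue { get; }

    public Maybe(T value)
    {
        _value = value;
        HasValue = true;
    }

    // Like Nullable<T>.Value, throws when no value is present.
    public T Value =>
        HasValue ? _value : throw new InvalidOperationException("Maybe has no value");

    // Works for both struct and class T, unlike Nullable<T>.
    public T GetValueOrDefault() => HasValue ? _value : default;
}
```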
IReadThroughCache<TKey, TValue> also has the following extension methods which may make the cache easier to consume in existing code:
| Method | Description |
|---|---|
| `bool TryGet(TKey key, out TValue value)` | tries to get the value for a key, returning `false` if it is not present |
| `TValue GetValueOrDefault(TKey key)` | gets the value for a key, or `default` if it is not present |
| `Task<TValue> GetValueOrDefaultAsync(TKey key)` | gets the value for a key asynchronously, or `default` if it is not present |
| `TValue[] GetBatchValueOrDefault(IReadOnlyCollection<TKey> keys)` | gets the values associated with some keys, with `default` for keys that are not present |
| `Task<TValue[]> GetBatchValueOrDefaultAsync(IReadOnlyCollection<TKey> keys)` | gets the values associated with some keys, asynchronously |
You can also compose caches with the following extension methods:
| Method | Description |
|---|---|
| `WithGenerationalCache(int? gen0Limit, TimeSpan? timeToLive)` | creates a new read-through cache that has a `Gen0` size limit and/or a periodic collection time |
| `WithThunderingHerdProtection()` | adds `ThunderingHerdProtection` to a cache, which prevents calling the data source concurrently for the same key |
`ThunderingHerdProtection<TKey, TValue>` can be used to prevent multiple threads calling an underlying database or remote service to load the value for the same key.
Different keys are handled concurrently, but each individual key is read by only one thread at a time. It can be used on the client side, before calling a remote service or database,
and it can also be used on the server side.
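A minimal sketch of this per-key single-flight technique, using a `ConcurrentDictionary` of `Lazy<TValue>` wrappers, might look like the following. This is an assumption about how such protection can be built, not the library's actual code:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Sketch of per-key single-flight loading (not the library's actual code).
// Concurrent callers for the same key share one load; different keys load
// concurrently. Caching of the result is left to the cache layer above.
public class SingleFlightLoader<TKey, TValue>
{
    private readonly ConcurrentDictionary<TKey, Lazy<TValue>> _inFlight = new();
    private readonly Func<TKey, TValue> _load;

    public SingleFlightLoader(Func<TKey, TValue> load) => _load = load;

    public TValue Get(TKey key)
    {
        // GetOrAdd may create extra Lazy wrappers under contention, but only
        // one is stored, and ExecutionAndPublication guarantees its factory
        // (the real load) runs at most once.
        var lazy = _inFlight.GetOrAdd(key,
            k => new Lazy<TValue>(() => _load(k),
                                  LazyThreadSafetyMode.ExecutionAndPublication));
        try
        {
            return lazy.Value;
        }
        finally
        {
            _inFlight.TryRemove(key, out _); // allow a fresh load next time
        }
    }
}
```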
The design is inspired by "generational garbage collection" in that:
- the cache has two generations, `Gen0` and `Gen1`
- as items are read from the underlying data source they are added to `Gen0`
- when a collection happens `Gen1` is thrown away and `Gen0` is moved to `Gen1`
- when an item is read from `Gen1` it is promoted back to `Gen0`
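The steps above can be sketched as a minimal, single-threaded generational cache. This is illustrative only; the real cache adds locking, asynchronous reads, negative caching, and the `timeToLive` collection trigger:

```csharp
using System;
using System.Collections.Generic;

// Minimal single-threaded sketch of the two-generation scheme (illustrative
// only; not the library's actual implementation).
public class GenerationalCache<TKey, TValue>
{
    private Dictionary<TKey, TValue> _gen0 = new();
    private Dictionary<TKey, TValue> _gen1 = new();
    private readonly Func<TKey, TValue> _dataSource;
    private readonly int _gen0Limit;

    public GenerationalCache(Func<TKey, TValue> dataSource, int gen0Limit)
    {
        _dataSource = dataSource;
        _gen0Limit = gen0Limit;
    }

    public TValue Get(TKey key)
    {
        if (_gen0.TryGetValue(key, out var value))
            return value;                           // Gen0 hit
        if (_gen1.TryGetValue(key, out value))
        {
            _gen1.Remove(key);
            Add(key, value);                        // promote Gen1 -> Gen0
            return value;
        }
        value = _dataSource(key);                   // read-through on miss
        Add(key, value);
        return value;
    }

    private void Add(TKey key, TValue value)
    {
        if (_gen0.Count >= _gen0Limit)
        {
            _gen1 = _gen0;                          // Gen0 becomes Gen1...
            _gen0 = new Dictionary<TKey, TValue>(); // ...old Gen1 is discarded
        }
        _gen0[key] = value;
    }
}
```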
Internally, `GenerationalReadThroughCache<TKey, TValue>` uses a lock (`Monitor`) to protect its data structures, but the lock is released if a read to the underlying data source is required.
The `GenerationalReadThroughCache<TKey, TValue>` constructor takes two arguments that control collection:
- if a `gen0Limit` is set then a collection will occur when `Gen0` reaches that limit
- if `timeToLive` is set then an un-read entry will be evicted some time after this (unless the `gen0Limit` was reached since the last collection)

One or both parameters need to be set, i.e.
- you can specify just a `gen0Limit`, but then the cache will never be cleared, even if it is not used again for a long time
- you can specify just a `timeToLive`, which lets the cache grow to any size but ensures items not used for "a long time" are evicted
- you can specify both `gen0Limit` and `timeToLive` to combine the attributes of both
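Putting this together, usage might look something like the sketch below. The exact constructor shape is an assumption (in particular how the underlying data source is supplied), and `LoadFromDatabase` is a hypothetical helper; check the library for the real signatures:

```csharp
// Hypothetical usage sketch; the constructor shape and LoadFromDatabase
// are assumptions, not the library's documented signatures.
var cache = new GenerationalReadThroughCache<int, string>(
        key => LoadFromDatabase(key),      // hypothetical read-through delegate
        gen0Limit: 10_000,                 // collect when Gen0 reaches 10,000 entries
        timeToLive: TimeSpan.FromMinutes(5)) // evict un-read entries after ~5 minutes
    .WithThunderingHerdProtection();       // one concurrent load per key

Maybe<string> maybe = cache.Get(42);       // documented interface method
if (cache.TryGet(42, out var value))       // documented extension method
    Console.WriteLine(value);
```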
`GenerationalReadThroughCache<TKey, TValue>` remembers the results of all read-through operations, so it records either the value for a key or the fact that a key does not have a value.
The following tests compare a generational cache with a Bit-Pseudo LRU cache and a plain `ConcurrentDictionary`.
| Test | Elapsed | Items in cache | Bytes allocated | Bytes held | Bytes held per key |
|---|---|---|---|---|---|
| BitPseudoLru 50% | 118 ms | 50,000 | 2,973,144 | 1,519,864 | 30.40 |
| Generational 25% Gen0 Limit | 34 ms | 33,332 | 3,782,700 | 895,764 | 26.87 |
| Generational 50ms Half-life | 25 ms | 100,000 | 6,050,996 | 3,129,228 | 31.29 |
| Concurrent Dictionary | 11 ms | 100,000 | 5,651,692 | 2,994,008 | 29.94 |
| Test | Elapsed | Items in cache | Bytes allocated | Bytes held | Bytes held per key |
|---|---|---|---|---|---|
| BitPseudoLru 50% | 2,062 ms | 250,000 | 9,686,520 | 6,529,452 | 26.12 |
| Generational 25% Gen0 Limit | 183 ms | 166,664 | 11,926,556 | 4,637,388 | 27.82 |
| Generational 50ms Half-life | 159 ms | 331,819 | 31,131,248 | 9,617,896 | 28.99 |
| Concurrent Dictionary | 205 ms | 500,000 | 37,960,620 | 16,607,692 | 33.22 |