A Cache Present
People love receiving cash for Christmas, but a cache is a much more useful gift for your performance-hungry web server or application.
Today we'll talk about CHI, a modern Cache Handling Interface for Perl -- sort of a DBI for caching.
USING CHI
Creating a cache looks like:
    my $cache = CHI->new(
        driver    => '...',
        namespace => '...',
        # driver-specific args
    );
driver indicates the cache backend, which controls how the cache data will be stored. Available backends include Memory, File, BDB, Memcached, and Redis - see CPAN for a complete list - and creating your own driver is simple.
namespace is a string that keeps this cache separate from other caches on the same backend. Often it's the name of the caller's Perl package or script.
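For instance, a file-backed cache might look roughly like this (the namespace and directory are made up for illustration; root_dir is where the File driver keeps its files):
    use CHI;

    # File-backed cache. The namespace and path below are only
    # examples; root_dir is the File driver's storage directory.
    my $cache = CHI->new(
        driver    => 'File',
        namespace => 'MyApp::Reports',
        root_dir  => '/tmp/myapp-cache',
    );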
CHI honors the standard get/set API that most cache modules use:
    # Try to get value from cache.
    #
    my $data = $cache->get($key);

    if ( !defined $data ) {

        # Was not in cache. Compute $data here.
        #
        $data = ...;

        # Store in cache with a 10 minute expiration time.
        #
        $cache->set( $key, $data, "10m" );
    }
It also provides an all-in-one compute API, which is shorter and less error-prone:
    # Try to get value from cache; if missing, call the sub
    # and store the returned value.
    #
    my $data = $cache->compute( $key, "10m", sub {

        # Compute and return value here
    } );
FEATURES
With CHI you get a lot of caching features under the tree, and you can use them no matter which backend you've chosen.
Automatic key/value serialization
You can store arbitrary values in the cache, including listrefs, hashrefs and combinations thereof; CHI will automatically serialize and deserialize them for you. Automatic compression over a certain size is also an option.
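For instance, roughly (the key and data here are made up), a nested structure round-trips through the cache with no packing or unpacking on your part; CHI hands it to its serializer (Storable, unless you configure a different one) behind the scenes:
    # Illustrative only: store a hashref with a nested listref ...
    $cache->set( 'user:42',
        { name => 'Alice', roles => [ 'admin', 'editor' ] }, '1h' );

    # ... and get back an equivalent structure.
    my $user = $cache->get('user:42');
    print $user->{roles}[0], "\n";    # prints "admin"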
You can also use arbitrary references as cache keys, e.g.
    my $key  = [ $pub_id, $article_id, $page_id ];
    my $data = $cache->get( $key, ... );
This saves you from the tedious and failure-prone process of composing multiple values into a key. And if your key is too long or too weird for your driver, CHI will digest and/or escape it for you.
Multilevel caches
You can chain multiple caches together in various ways. For example, here we place a size-limited in-memory L1 cache in front of a memcached cache. CHI will look in the memory cache first; on a miss, it will consult memcached and write the value back into the memory cache.
    my $cache = CHI->new(
        driver   => 'Memcached',
        servers  => [ "10.0.0.15:11211", "10.0.0.15:11212" ],
        l1_cache => {
            driver   => 'Memory',
            global   => 1,
            max_size => 1024 * 1024,
        },
    );
Miss stampede avoidance
A miss stampede occurs when a popular cache item expires and a large number of processes all rush to recompute it. CHI provides two ways to reduce or avoid this common cache problem - probabilistic expiration (in which expiration occurs over a range of times, instead of at a single fixed time) and busy locks (in which the first process sets a lock so that other processes know not to start recomputing).
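Roughly, both can be set when you create the cache (the server address and durations below are made up for illustration):
    my $cache = CHI->new(
        driver  => 'Memcached',
        servers => ["10.0.0.15:11211"],

        # Spread expirations over a window: an item set with "10m"
        # may expire anywhere from about 7.5 to 10 minutes out.
        expires_variance => 0.25,

        # On expiration, the first get() pushes the expiry forward
        # 30 seconds and recomputes; other processes keep serving
        # the old value in the meantime.
        busy_lock => '30 sec',
    );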
Logging and statistics
You can tell CHI to log every cache hit, miss and set for debugging purposes. You can also tell CHI to output statistics about the performance of your caches, including the hit/miss rate and the average compute time for each namespace.
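Here's a rough sketch of wiring both up. CHI logs through Log::Any, so any adapter will do; the Stderr adapter and log_level shown below are just one choice, and the statistics call uses CHI's CHI::Stats interface:
    use CHI;
    use Log::Any::Adapter;

    # Route Log::Any output -- including CHI's hit/miss/set debug
    # lines -- to STDERR.
    Log::Any::Adapter->set( 'Stderr', log_level => 'debug' );

    # Collect statistics for every CHI cache in this process; the
    # per-namespace totals are reported through the same logger.
    CHI->stats->enable();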
Happy caching, all!
SEE ALSO
CHI