Software development teams often use in-memory caches to improve performance. They want to speed up access to, or reduce load on, a backing store (database, file system, etc.) by keeping some or all of the data in memory.
Implement a system in the simplest way first; then, after performance testing, optimize only where necessary. The reason is that performance optimizations make code more complex and can introduce defects, and those defects are often subtle, difficult to find, and expensive to fix.
Developers often overlook thorough testing of caching optimizations, treating a cache as an implementation detail rather than as functionality that requires testing.
A write-through cache writes to both the cache and the backing store as part of the same operation. A service can use this approach when it is the only component in the system that writes the backing store's data.
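The write-through idea can be sketched in a few lines. This is a minimal illustration with made-up names, not a production implementation; a plain dict stands in for the backing store.

```python
# Sketch of a write-through cache: every write goes to both the cache and
# the backing store in the same operation, so the two never diverge for
# data written through this service.

class WriteThroughCache:
    def __init__(self, backing_store):
        self._store = backing_store   # e.g. a dict standing in for a database
        self._cache = {}

    def put(self, key, value):
        self._store[key] = value      # write to the backing store...
        self._cache[key] = value      # ...and to the cache, together

    def get(self, key):
        if key in self._cache:        # cache hit: no trip to the backing store
            return self._cache[key]
        value = self._store[key]      # cache miss: read through and remember
        self._cache[key] = value
        return value
```

Because the service is the sole writer, a `get` after a `put` can always be served from memory without risking staleness.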
I recommend that you consider the following items when designing, implementing, and testing caches. I have seen live defects in almost all of these areas.
You need to have enough memory for the cache. What is the size of each object in the cache? What is the maximum number of objects in the cache? Do you have enough memory on your machine or JVM when the cache is full? What happens when the cache is full and you need to add one more object to it?
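A back-of-the-envelope sizing calculation answers the first two questions. The numbers below are purely illustrative; substitute your own measured object size and maximum object count.

```python
# Rough cache sizing with illustrative numbers: if each cached object
# occupies about 2 KB and the cache may hold up to 500,000 objects, the
# cache alone needs roughly 1 GB of heap, before any runtime overhead.

object_size_bytes = 2 * 1024      # assumed average size of one cached object
max_objects = 500_000             # assumed maximum number of cached objects
cache_bytes = object_size_bytes * max_objects

print(cache_bytes)                      # 1_024_000_000 bytes
print(round(cache_bytes / 1024**3, 2))  # ~0.95 GiB
```

If that figure is close to (or above) the memory available to your process, the cache cannot hold the maximum number of objects and you must plan for eviction.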
How is the cache initialized? If it’s initialized when your application starts, how long does that take when you cache the maximum number of objects? Can your application serve requests during initialization? If objects are cached as requests are made, is the response time of the cache miss or write-through transaction acceptable?
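The two initialization strategies mentioned above can be sketched side by side. Names here are illustrative, and plain dicts stand in for the cache and the backing store.

```python
# Eager warm-up: load everything at startup. Startup time grows with the
# size of the store, and requests may have to wait until it finishes.
def warm_up(cache, store):
    for key, value in store.items():
        cache[key] = value

# Lazy loading: cache each object on its first request. Startup is instant,
# but the first request for each key pays the backing-store round trip.
def get_lazy(cache, store, key):
    if key not in cache:
        cache[key] = store[key]   # cache miss: read from the backing store
    return cache[key]
```

Which strategy is acceptable depends on the questions in the paragraph above: how long warm-up takes at maximum size, and whether the miss latency of lazy loading is tolerable.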
Verify that the cache is actually used. If the cache is not write-through, you can request the data, change it directly in the backing store, and request it again; the response should not change. Alternatively, request some data so that it is cached, make the backing store inaccessible, and then request the same data.
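The first of those checks can be written as a unit-style test. This is a self-contained sketch with a hypothetical read-through `get` and a dict standing in for the database:

```python
# Prove the cache is consulted: change the backing store behind the
# cache's back and confirm the previously cached value is still returned.

store = {"user:1": "Alice"}
cache = {}

def get(key):
    if key not in cache:
        cache[key] = store[key]   # read-through on a miss
    return cache[key]

assert get("user:1") == "Alice"   # first read populates the cache
store["user:1"] = "Bob"           # change the backing store directly
assert get("user:1") == "Alice"   # still the cached value: cache is in use
```

If the second assertion fails, requests are going to the backing store on every call and the cache is not doing its job.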
What happens when the backing store is unavailable? A request for data that is cached should probably return successfully. A request for data that is not cached should probably return an error or degraded functionality.
If the cache is not write-through (some other process also writes to the backing store), you may have to refresh the data periodically or on demand. Is it acceptable for the cache to differ from the backing store? For how long? Verify that the data is refreshed properly.
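Periodic refresh is often implemented as a time-to-live (TTL) on each entry. Below is a minimal sketch assuming a 60-second staleness budget; the clock is injectable so the refresh behavior can be tested without sleeping.

```python
import time

# TTL cache sketch: each entry remembers when it was loaded and is
# re-read from the backing store once it is older than the TTL.

class TTLCache:
    def __init__(self, load_fn, ttl_seconds=60, clock=time.monotonic):
        self._load = load_fn              # reads one key from the backing store
        self._ttl = ttl_seconds
        self._clock = clock               # injectable for testing
        self._entries = {}                # key -> (value, loaded_at)

    def get(self, key):
        now = self._clock()
        entry = self._entries.get(key)
        if entry is not None and now - entry[1] < self._ttl:
            return entry[0]               # still within the staleness budget
        value = self._load(key)           # stale or absent: refresh
        self._entries[key] = (value, now)
        return value
```

The TTL is exactly the answer to "how long is staleness acceptable", so it should come from the requirements, not from a guess.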
Will the cache hold all of the objects or only some of them? When the cache is full and you need to add an item, which object do you evict? What eviction scheme do you use: Least Recently Used (LRU), Least Frequently Used (LFU), First In First Out (FIFO), or expiry after a period of time? Verify the eviction scheme.
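As one concrete example, LRU eviction can be built on the standard library's `OrderedDict` (for caching function results, `functools.lru_cache` offers this ready-made). A minimal sketch:

```python
from collections import OrderedDict

# LRU cache sketch: when full, evict the least recently used entry.

class LRUCache:
    def __init__(self, max_size):
        self._max = max_size
        self._data = OrderedDict()        # insertion/access order is tracked

    def get(self, key):
        value = self._data[key]           # raises KeyError on a miss
        self._data.move_to_end(key)       # mark as most recently used
        return value

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self._max:
            self._data.popitem(last=False)  # evict least recently used
```

Verifying the scheme means asserting that the right victim is chosen: fill the cache, touch one entry, add another, and check that the untouched entry is the one evicted.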
In addition, for testing and/or production support, you may need the ability to evict individual objects and/or all objects from the cache. One use case: you fix some bad data in the database for a particular customer, then clear the cache for just that customer.
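Those support operations amount to two small methods on the cache; the sketch below uses illustrative names on a dict-backed cache.

```python
# Support operations: evict one entry (e.g. one customer's cached data
# after fixing their row in the database), or flush the whole cache.

class EvictableCache:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def evict(self, key):
        self._data.pop(key, None)   # no error if the key was not cached

    def clear(self):
        self._data.clear()          # full flush, e.g. after a bulk data fix
```

In production these are typically exposed through an admin endpoint or operations tool rather than called directly.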
Finally, verify that you get the performance improvement you expect. Test it using production traffic patterns. When the cache is live in production, monitor the size of the cache and the hit ratio (the fraction of requests that find the object in the cache) to determine whether the cache is working properly. You may need to make improvements.
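Hit-ratio tracking is just a pair of counters around the lookup. A sketch with illustrative names; in production you would export these numbers to your metrics system rather than read them directly.

```python
# Instrumented cache sketch: count hits and misses so the hit ratio can
# be monitored in production.

class InstrumentedCache:
    def __init__(self, load_fn):
        self._load = load_fn    # reads one key from the backing store
        self._cache = {}
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self._cache:
            self.hits += 1
            return self._cache[key]
        self.misses += 1
        value = self._load(key)
        self._cache[key] = value
        return value

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

A persistently low hit ratio under production traffic suggests the cache is too small, the eviction scheme is wrong for the access pattern, or the data simply is not re-requested often enough to be worth caching.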
I hope you agree with me that caches have a lot of functionality that deserves proper design, implementation and testing.