Cache in One Place

Updated on March 10, 2020, at 12:00

DISCLAIMER: All views are my own and should not be attributed to my employer or associates.

or the fewest places possible. Cache invalidation is hard, so why make your problems harder by multiplying them?

I've been thinking about a comment made here about "cacheless architecture": https://www.reddit.com/r/java/comments/dbtog0/where_is_my_cache_architectural_patterns_for/f2am2vn/

We went from a tangled caching mess architecture to a completely cacheless architecture at a higher volume than we could manage before. Clients request data once and keep it up to date with patch streams. Queries are efficiently indexed and inefficient access patterns are eliminated (databases are fast). All interfaces support batch processing and operations have bounded fan outs. We distribute largely by sharding rather than having a zillion heterogeneous microservices and we carefully choose where work is done. The closest thing we have to caches are projections.

There is no such thing as a cache miss or stale data in our system and performance, latency (request and e2e) and load capacity is significantly better than it was when we were working with caches and more traditional architectural patterns.

What I take from the comment is: create as few caches as possible.

Local caching in every application makes invalidation extraordinarily hard, so I stay away from it unless the cached data is mostly static. Non-player names, map data, and zip codes could be considered locally cacheable. Dynamic user data, to me, has to be cached at the datastore. Depending on how often the data is accessed, it could also be locally cached for a very, very short time on a hot path.
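The short-TTL local cache mentioned above can be sketched as follows. This is a minimal, hypothetical illustration, not a production cache; the one-second TTL and the zip-code example are assumptions for demonstration.

```python
import time

class TTLCache:
    """A minimal local cache with a short time-to-live (TTL).

    Only reasonable for mostly-static data (e.g. zip codes) or
    very short TTLs on a hot path, as discussed above.
    """

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: drop the stale entry
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

# Usage: cache zip-code lookups briefly on a hot path.
cache = TTLCache(ttl_seconds=1.0)
cache.put("90210", "Beverly Hills")
print(cache.get("90210"))
```

Expiry-on-read keeps the sketch simple; a real implementation would also bound the cache's size.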

I used to work at a place that cached heavily everywhere, which meant every server had to listen for events to invalidate its caches. If Memcached or Redis went down, it was GG, come back another day. Recovery from cache outages there was hard, but preventable; effort went into mitigating the cache going down, when the root of the problem was the architecture. It wasn't a fun thing to deal with. I know they want(ed) to move to a service-based way of doing it, and I still don't believe in that way of managing data access, as it doesn't solve the existing problem in an adoptable way.

Facebook appears to mitigate the problem by treating cache exactly as a CPU does: traffic is directed to the cache, falls through to the database on a miss, and when the database updates, the log it produces feeds back into the cache. The datastore is instrumented to provide a unified, invisible interface to data; no changes are needed at the datastore client level, just like instrumenting a CPU so that code doesn't need to change. Nothing short of amazing.
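The look-aside pattern described above, with the database's change log driving invalidation, can be sketched like this. Plain dicts stand in for the real cache and database, and all names here are hypothetical.

```python
# Reads go cache-first and fall through to the database on a miss;
# writes go to the database, whose log feeds back to invalidate cache.

class LookAsideStore:
    def __init__(self):
        self.cache = {}
        self.db = {}

    def read(self, key):
        if key in self.cache:        # cache hit: serve directly
            return self.cache[key]
        value = self.db.get(key)     # miss: fall through to the database
        if value is not None:
            self.cache[key] = value  # fill the cache on the way back
        return value

    def write(self, key, value):
        self.db[key] = value
        self.on_db_log(key)          # the database's log produced by the write...

    def on_db_log(self, key):
        self.cache.pop(key, None)    # ...feeds back and invalidates the entry

store = LookAsideStore()
store.write("user:1", {"name": "Ada"})
print(store.read("user:1"))  # miss, then the cache is filled
print(store.read("user:1"))  # hit
```

The point, as in the CPU analogy, is that callers only ever see `read` and `write`; the cache is invisible to them.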

Me? I choose Redis as both a cache and a primary data store. This frees me up to choose any other database as the secondary. MySQL is the obvious secondary, but with the advent of databases that are distributed from the get-go, it may be hard to stick with. If Redis goes down, the secondary will be slow, but still readable and writable. It is cheap. Not every piece of data needs to be in Redis; some can be served by the secondary database. Analysis of data takes place outside the serving stores, so it does not need to be considered here.
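The read path of this setup can be sketched as a simple fallback. Dicts stand in for the Redis and MySQL clients so the sketch stays self-contained; the key names are made up for illustration.

```python
def read_with_fallback(key, primary, secondary):
    """Try the fast primary store; fall back to the slow secondary.

    With real clients, a miss or an outage in the primary (e.g. a
    redis.exceptions.ConnectionError from redis-py) would trigger the
    fallback; here a dict's KeyError stands in for both cases.
    """
    try:
        return primary[key]
    except KeyError:
        return secondary.get(key)

primary = {"hot:key": "from-redis"}
secondary = {"hot:key": "from-mysql", "cold:key": "from-mysql"}
print(read_with_fallback("hot:key", primary, secondary))   # served by primary
print(read_with_fallback("cold:key", primary, secondary))  # not in Redis: secondary serves it
```

Note the second call: data that was never put in Redis is still served, just more slowly, which is the whole point of keeping the secondary read/writable.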

For web, caching fetched data on the client side is unusually hard, since no one really designs versioning into their data, and the question of how to invalidate the client's cache comes up. OOF! Some suggest using websockets or SSE to push events that invalidate caches, or to just plainly push new data. The cacheless comment above alludes to this: clients are not unnecessarily bombarding servers for data because the server pushes it to them. There may be times when the client has to ask the server for a range of data to fill in holes it missed, possibly due to a disconnection.
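A client in this push model might look like the following sketch: it holds a snapshot, applies versioned patches pushed over websockets/SSE, and detects the holes mentioned above by spotting a version gap. The patch shape and all names are hypothetical assumptions, not any real protocol.

```python
class PatchClient:
    """Client-side state kept fresh by server-pushed patches."""

    def __init__(self, snapshot, version):
        self.state = dict(snapshot)
        self.version = version  # version of the last patch applied

    def apply_patch(self, patch):
        """patch = {"version": int, "set": {key: value}}.

        Returns False on a version gap, signalling that the caller
        should re-request the missing range from the server.
        """
        if patch["version"] != self.version + 1:
            return False  # hole detected (e.g. after a disconnection)
        self.state.update(patch["set"])
        self.version = patch["version"]
        return True

client = PatchClient({"score": 10}, version=3)
print(client.apply_patch({"version": 4, "set": {"score": 12}}))  # applied
print(client.apply_patch({"version": 6, "set": {"score": 99}}))  # gap: v5 was missed
```

Monotonic versions are the minimal form of the "designed-in versioning" the paragraph says most data lacks.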

Any request client that lets users implement its interfaces against the existing routes will win the hearts and minds of everyone in the transaction: HTTP to fetch data statelessly as usual, with websockets/SSE bringing in the latest updates. On a desktop or mobile application there are more options, such as ZeroMQ, UDP, TCP, etc.

It is all about being conservative about where you cache data, with invalidation treated as eventual rather than as something that never happens. Eventual requires a log.
