Quite recently at work, I was tasked with improving the response time of our main service. The service was pretty slow for the work it was doing. There was some low-hanging fruit, and it got substantially faster without much effort. The next step was to reduce the number of external calls. These calls can be cached, as the response from the external service does not change that often. I chose a multi-level cache to reduce latency as much as possible.
Multi-level here means the cache operates at different levels: an application-level in-memory cache combined with a shared network cache. So I thought, why not rewrite this with DAPR? What is DAPR? I'm gonna quote the website here, as it is on point: "Dapr provides integrated APIs for communication, state, and workflow. Dapr leverages industry best practices for security, resiliency, and observability, so you can focus on your code." You get a bunch of tools that you can use to build your apps.
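To make the idea concrete, here is a minimal sketch of a two-level lookup. The class and names are mine, not from the repo, and a plain dict stands in for the shared store that DAPR will later provide:

```python
from typing import Any, Optional


class MultiLevelCache:
    """Two-level cache: a fast in-process dict backed by a shared store."""

    def __init__(self, shared_store: dict):
        self._local: dict = {}       # level 1: per-process memory, fastest
        self._shared = shared_store  # level 2: shared (e.g. Redis via DAPR)

    def get(self, key: str) -> Optional[Any]:
        # Check level 1 first; only fall through to the shared store on a miss.
        if key in self._local:
            return self._local[key]
        value = self._shared.get(key)
        if value is not None:
            # Promote the hit to level 1 so the next lookup stays in-process.
            self._local[key] = value
        return value

    def set(self, key: str, value: Any) -> None:
        # Write through to both levels.
        self._local[key] = value
        self._shared[key] = value
```

The point of the promotion step is that a second process sharing the same store pays the network cost only once per key.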
How can DAPR help?
The shared cache part uses DAPR’s state management. The best part is that any state supported by DAPR will work here, and all is done with configuration. This example is using Redis but you can easily swap it for anything that is supported.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: 2nd-lvl-cache
  namespace: app-namespace
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: "localhost:6379"
DAPR gives you a uniform interface for every supported storage, so no code changes are needed when you decide to move away from Redis. The protocol I decided to use is HTTP, as it is more widely known. Below is an example of storing one item in the cache.
from typing import Any

import httpx

base_url = "http://localhost:3500"  # Dapr sidecar, default HTTP port
DAPR_STATE_STORE = "2nd-lvl-cache"  # component name from the YAML above


async def set_to_dapr(key: str, value: Any, ttl: int):
    async with httpx.AsyncClient() as client:
        await client.post(
            f"{base_url}/v1.0/state/{DAPR_STATE_STORE}",
            json=[{"key": key, "value": value, "metadata": {"ttlInSeconds": str(ttl)}}],
        )
Code and conclusion
Here is the full solution in the form of a GitHub repo. Compared with the implementation I did at work, this one is much simpler, as the complexity of pluggable state stores or clients is handled by DAPR, while I had to build a proper hierarchy with dependency injection.
Oh man, DAPR is great.
2 replies on “Multi-level cache with DAPR”
Why no TTL for the in-memory cache?
Hey,
If you are referring to the code example in the repo https://github.com/inirudebwoy/multi-level-cache-with-dapr, then it is an example. A very simplified one.
Normally a TTL is required, as the in-memory cache can grow and use up the host's memory, so you need to limit its size in some way: by using a TTL, or by setting a maximum size. An OrderedDict would help, or functools.lru_cache.
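To illustrate the suggestion above, here is one possible sketch of an in-memory cache with both a TTL and a maximum size, built on OrderedDict (the class is mine, not part of the repo):

```python
import time
from collections import OrderedDict
from typing import Any, Optional


class TTLCache:
    """In-memory cache with per-entry TTL and a size cap.

    OrderedDict keeps insertion order, so the oldest entry is evicted first.
    """

    def __init__(self, max_size: int = 128, ttl: float = 60.0):
        self._data = OrderedDict()  # key -> (expires_at, value)
        self._max_size = max_size
        self._ttl = ttl

    def get(self, key: str) -> Optional[Any]:
        entry = self._data.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() > expires_at:
            del self._data[key]  # expired: drop it and report a miss
            return None
        return value

    def set(self, key: str, value: Any) -> None:
        if key in self._data:
            del self._data[key]  # re-insert so the key moves to the newest slot
        elif len(self._data) >= self._max_size:
            self._data.popitem(last=False)  # evict the oldest entry
        self._data[key] = (time.monotonic() + self._ttl, value)
```

functools.lru_cache gives you the size cap for free, but not the TTL, which is why I reach for a small hand-rolled class when both limits matter.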