Michał Tyszkiewicz
12/14/2024
For GraphQL, a cache can significantly improve performance and reduce redundancy, particularly when the same data is queried multiple times within a short timeframe. In this post, I'll go over implementing caching using GraphQL Zeus to fetch data from our custom CMS. No worries, this can be applied just as effectively to other GraphQL APIs, such as Hygraph or Hasura. Whether you're working with a headless CMS, an e-commerce backend, or any other GraphQL-powered service, caching is just as important, and the implementation should look very similar.
Caching provides a number of advantages: fewer redundant API calls, faster responses for frequently requested data, and less load on the CMS or API serving it.
Let’s start with a simple query I made using GraphQL Zeus, which fetches homepage data from our CMS:
export const getHomepageData = async () => {
  try {
    // chain and HomepageDataSelector come from the generated GraphQL Zeus client
    const result = await chain('query')({
      onehomepageBySlug: [{ slug: 'home' }, HomepageDataSelector],
    });
    return result;
  } catch (error) {
    console.error('failed to fetch home data', error);
  }
};
This query retrieves all the homepage data at once and passes it to the relevant components. This allows users to update the homepage content directly through the CMS without needing to modify any code. Currently, this function is called in server-side props, meaning it triggers an API call every time the page is refreshed or visited.
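To make that call pattern concrete, here's a minimal sketch of how the server-side call might look. The getServerSideProps wrapper, the file location, and the import path are assumptions on my part, since the post only says the function runs in server-side props:

// pages/index.tsx (assumed location)
import { getHomepageData } from '../lib/homepage'; // assumed import path

export const getServerSideProps = async () => {
  // Without caching, this hits the GraphQL API on every single request
  const homepage = await getHomepageData();
  // getHomepageData returns undefined if the fetch failed, so fall back to null
  return { props: { homepage: homepage ?? null } };
};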
To optimize this and reduce redundant API calls, we'll implement caching using the lru-cache library.
LRU stands for "Least Recently Used," a caching strategy that automatically removes the least recently accessed items when the cache reaches its limit. The lru-cache library offers several configuration options, but to set up a basic GraphQL cache we only need two key settings:
ttl (Time to Live): Defines how long an item stays in the cache before it's considered expired, specified in milliseconds.
ttlAutopurge: When enabled, automatically deletes expired items from the cache. Without it, expired items are only purged when the cache is next accessed, which can potentially let the cache grow unbounded.

import { LRUCache } from 'lru-cache';

const cache = new LRUCache({
  ttl: 1000 * 60 * 5, // Set time to live to 5 minutes
  ttlAutopurge: true, // Automatically clean up expired entries
});
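To see what those two options do in practice, here's a small standalone sketch; the five-second TTL and the key/value are just for demonstration:

import { LRUCache } from 'lru-cache';

const demo = new LRUCache<string, string>({
  ttl: 5000,          // Entries expire after 5 seconds
  ttlAutopurge: true, // Expired entries are removed automatically
});

demo.set('greeting', 'hello');
console.log(demo.get('greeting')); // 'hello' while the entry is still fresh

setTimeout(() => {
  console.log(demo.get('greeting')); // undefined once the TTL has elapsed
}, 6000);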
To cache the query I mentioned earlier, we need to do three simple things: define a unique cache key, return the cached result if that key is already in the cache, and otherwise run the query and store its result in the cache:
export const getHomepageData = async () => {
  const cacheKey = 'homepageData'; // Cache key for this entry
  if (cache.has(cacheKey)) { // Check if the cache contains this key
    return cache.get(cacheKey) as { onehomepageBySlug: HomepageDataType }; // Return cached result
  }
  try {
    const result = await chain('query')({
      onehomepageBySlug: [{ slug: 'home' }, HomepageDataSelector],
    });
    cache.set(cacheKey, result); // Store query result in the cache
    return result;
  } catch (error) {
    console.error('failed to fetch home data', error);
  }
};
This approach results in two scenarios: on a cache hit, the data is returned straight from memory and no API call is made; on a cache miss, the query runs as before and the result is stored in the cache for the next request.
As mentioned earlier, the cache key must be unique to avoid conflicts. This is especially important for queries that include parameters, such as locale (for localization) or slug (for specific blog pages). Without unique cache keys, you may end up with the same content being served for different locales or blog pages. To avoid this, it's best to use a dynamic cache key based on the query parameters.
export const getBlogpostBySlug = async ({ slug }: { slug: string }) => {
  const cacheKey = `Blogpost:${slug}`; // Unique cache key based on the slug
  // ...the rest follows the same check-then-fetch pattern as getHomepageData
By using this dynamic approach, we ensure that the correct data is returned for the given slug. If you were to use a static cache key like 'blogpost', you'd always get the first cached blog post, regardless of which slug is provided. The same issue would arise with localization: without unique keys, you'd end up serving the wrong content for different locales.
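For reference, here's a minimal sketch of what the fully cached version might look like. The oneblogpostBySlug field, BlogpostDataSelector, and BlogpostDataType names are assumptions standing in for whatever your generated Zeus client and selectors actually expose:

export const getBlogpostBySlug = async ({ slug }: { slug: string }) => {
  const cacheKey = `Blogpost:${slug}`; // Unique cache key based on the slug
  if (cache.has(cacheKey)) {
    return cache.get(cacheKey) as { oneblogpostBySlug: BlogpostDataType }; // Cache hit: no API call
  }
  try {
    const result = await chain('query')({
      oneblogpostBySlug: [{ slug }, BlogpostDataSelector], // Assumed field and selector names
    });
    cache.set(cacheKey, result); // Cache miss: store the fresh result for later requests
    return result;
  } catch (error) {
    console.error(`failed to fetch blog post "${slug}"`, error);
  }
};

The same pattern covers localization: a key like Blogpost:${locale}:${slug} keeps translated versions of the same post from overwriting each other.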
GraphQL is all about efficiency: unlike REST, you retrieve only the data you need. It makes sense to take that a step further. With a GraphQL cache set up like this, we can reduce redundant API calls and significantly speed up queries, especially for frequently accessed data like homepage content or blog posts. Dynamic cache keys based on query parameters ensure that the data is correctly localized or filtered for specific slugs, and GraphQL Zeus selectors keep everything type-safe. Together, this can greatly improve your app's speed and scalability. As mentioned, even if you're not using GraphQL Zeus, the implementation should look very similar to the above, so give it a try!