
Boosting Monolith Performance: Caching with OpenResty, SRCache, and Redis


Intro

When you maintain a monolith and traffic increases, you might decide to introduce caching to manage the new volume. If your system is modern or small, you have several caching strategies:

  1. Handle the caching logic within the code by integrating something like Redis directly into the app code.

  2. Use an external cache, such as a CDN or reverse proxy.

When the system is large and outdated, adding caching logic directly into the code might take months of work. So, you might opt for an external solution like a CDN.

A Content Delivery Network (CDN) such as Cloudflare or CloudFront is an easy solution in most cases. However, it might not suit your needs if:

  1. Your traffic is not public

  2. You need highly consistent and fast cache invalidation

  3. You have a write-heavy workload

So, if a CDN doesn't suit your needs and you can't modify the application code to handle caching, your remaining option is reverse-proxy caching, such as Nginx's proxy_cache or Varnish.
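As a sketch of what reverse-proxy caching looks like with plain Nginx, the following uses the standard proxy_cache directives. The upstream name, cache path, and zone sizes are placeholders, not a tuned configuration:

```nginx
# Minimal sketch of Nginx reverse-proxy caching; paths and sizes are illustrative.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;

    location / {
        proxy_cache app_cache;
        proxy_cache_valid 200 10m;                   # cache successful responses for 10 minutes
        proxy_cache_key $scheme$host$request_uri;
        add_header X-Cache-Status $upstream_cache_status;

        proxy_pass http://backend;                   # hypothetical upstream name
    }
}
```

This works well for simple cases, but invalidation is limited (purging requires the commercial proxy_cache_purge or third-party modules), which is where the Redis-backed approach below comes in.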

OpenResty

OpenResty is a web server based on Nginx that can be extended using Lua code. Because it's built on Nginx, OpenResty is fast and reliable. Its ability to be customised with Lua code makes it extremely flexible.

OpenResty also includes a range of components and libraries, such as lua-resty-redis, lua-resty-mysql, and lua-resty-jwt.

SRCache

SRCache is an Nginx module that provides a transparent caching layer for various Nginx locations. This module allows customisation of the storage backend used, so it can be combined with Redis.

Since it's an Nginx module, you don't need OpenResty to use SRCache. However, if you want to use Lua code within SRCache locations, you will need OpenResty.

NGINX SRCache module workflow

Below is an example SRCache configuration using Redis, taken from the SRCache GitHub repository.

 location /api {
     default_type text/css;

     set $key $uri;
     set_escape_uri $escaped_key $key;

     srcache_fetch GET /redis $key;
     srcache_store PUT /redis2 key=$escaped_key&exptime=120;

     # fastcgi_pass/proxy_pass/drizzle_pass/postgres_pass/echo/etc
 }

 location = /redis {
     internal;

     set_md5 $redis_key $args;
     redis_pass 127.0.0.1:6379;
 }

 location = /redis2 {
     internal;

     set_unescape_uri $exptime $arg_exptime;
     set_unescape_uri $key $arg_key;
     set_md5 $key;

     redis2_query set $key $echo_request_body;
     redis2_query expire $key $exptime;
     redis2_pass 127.0.0.1:6379;
 }

A Practical Example

This is a more complete example in which we cache generated invoices, using the invoice ID as the cache key.

The SRCache internal locations use the lua-resty-rediscluster library to store the cache in a Redis cluster. In this example we use the simple Redis get and set commands, but with Lua code we could use more complex Redis commands like hset.


    location ~* ^/invoice/([0-9]+)(\.pdf)?$ {

        set $key "$1";
        set $ttl 432000;  # 5 days

        set $cache_skip 0;
        srcache_store_skip $cache_skip;
        srcache_store_statuses 200 201 304;
        srcache_ignore_content_encoding on;

        set_escape_uri $escaped_key $key;

        srcache_fetch GET /redis-fetch key=$escaped_key;
        srcache_store PUT /redis-store key=$escaped_key&ttl=$ttl;

        proxy_pass http://backend;

        add_header 'L2-Cache-Fetch-Status' $srcache_fetch_status;
        add_header 'L2-Cache-Store-Status' $srcache_store_status;
    }
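Because the skip flag is an ordinary Nginx variable, the cache can also be bypassed per request. As a sketch (the X-Cache-Bypass header name is hypothetical, not part of the original config), the srcache_fetch_skip and srcache_store_skip directives could be driven from a request header:

```nginx
# Sketch: bypass both cache fetch and store when the client
# sends a (hypothetical) X-Cache-Bypass request header.
set $cache_skip 0;
if ($http_x_cache_bypass) {
    set $cache_skip 1;
}
srcache_fetch_skip $cache_skip;
srcache_store_skip $cache_skip;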


    location = /redis-fetch {
        internal;

        content_by_lua_block {
            local config = {
                ...
                REDIS_CONFIG_OPTIONS
                ...
            }

            local redis_cluster = require "resty.rediscluster"
            local red_c = assert(redis_cluster:new(config))

            local args = ngx.req.get_uri_args()
            local key = assert(args["key"], "no key found")
            ngx.log(ngx.INFO, "cache fetch key: ", key)  -- debug only

            local data = assert(red_c:get(key))
            if data == ngx.null then
                return ngx.exit(404)  -- cache miss: SRCache falls through to the backend
            end
            ngx.print(data)
        }
    }

    location = /redis-store {
        internal;

        content_by_lua_block {
            local config = {
                ...
                REDIS_CONFIG_OPTIONS
                ...
            }

            local redis_cluster = require "resty.rediscluster"
            local red_c = assert(redis_cluster:new(config))

            local args = ngx.req.get_uri_args()
            local key = assert(args["key"], "no key found")
            local ttl = assert(args["ttl"], "no ttl arg found")

            ngx.req.read_body()
            local value = assert(ngx.req.get_body_data(), "no value found")
            ngx.log(ngx.INFO, "cache store key: ", key)  -- debug only

            red_c:init_pipeline()
            red_c:set(key, value)
            red_c:expire(key, ttl, "NX")  -- NX requires Redis >= 7.0
            assert(red_c:commit_pipeline())
        }
    }
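As an aside, the pipelined set plus expire pair above is not atomic: if the second command fails, the key is stored without a TTL. A sketch of an alternative, assuming the cluster client forwards extra arguments to Redis the way lua-resty-redis does, is a single SET with the EX option:

```nginx
location = /redis-store {
    internal;

    content_by_lua_block {
        -- config and red_c connection setup as in the example above,
        -- omitted here for brevity
        local args = ngx.req.get_uri_args()
        local key = assert(args["key"], "no key found")
        local ttl = assert(args["ttl"], "no ttl arg found")

        ngx.req.read_body()
        local value = assert(ngx.req.get_body_data(), "no value found")

        -- SET ... EX <ttl> writes the value and its TTL atomically,
        -- replacing the pipelined set + expire pair
        assert(red_c:set(key, value, "EX", ttl))
    }
}
```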

Purging the cache

What if we want to invalidate a single invoice from our example above?

    location ~* ^/purge/([0-9]+)(\.pdf)?$ {

        # Restrict access to trusted clients in production.
        set $key "$1";

        content_by_lua_block {
            local config = {
                ...
                REDIS_CONFIG_OPTIONS
                ...
            }

            local redis_cluster = require "resty.rediscluster"
            local red_c = assert(redis_cluster:new(config))

            assert(red_c:del(ngx.var.key))
            return ngx.exit(204)
        }
    }

I've kept these examples as simple as possible, but SRCache offers a lot of customisation, and OpenResty lets you layer on other features such as request coalescing, rate limiting, and more.
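For instance, SRCache can honour client and upstream Cache-Control headers and cap the size of stored entries. The directives below come from the module's documentation; the values are illustrative, not recommendations:

```nginx
# Illustrative values; see the srcache-nginx-module README for details.
srcache_request_cache_control on;    # honour Cache-Control/Pragma sent by the client
srcache_response_cache_control on;   # skip storing responses marked no-cache/no-store
srcache_store_max_size 1m;           # don't cache responses larger than 1 MB
srcache_default_expire 60s;          # fallback TTL when the response doesn't set one
```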

Performance

The solution in this example performs very well because it combines two high-performing components: Redis and Nginx. I haven't run a full load test, but you can try it yourself by setting up a simple stack with Docker.


Do you need help implementing something like this? Contact me on LinkedIn
https://www.linkedin.com/in/alessandro-marino-ac/