Rate limiting the emails for our NextJS app with BullMQ

I thought I’d write something quick about how I handled a problem that came up for us on SPA recently. Here goes:

One of the many tasks the app has to do is send email notifications. It’s important because it helps maintain visibility - if you get an email, you have a record that something happened, even if the data on the server changes later.

But sometimes, we need to send a lot of emails - for instance, if we are emailing everyone who is late in submitting their marks. We ran into an issue when doing this - our SMTP provider rate limits us to something like 200 emails every 10 mins.

The basic answer to this problem is a message queue. A message queue lets you “queue up” jobs and process them later. There are lots of mature solutions in this space, and that’s great, because there are lots of good ways to solve it. The downside is that figuring out which one is right for you can be quite tricky.
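In miniature, the idea is just producers pushing jobs that a worker drains at its own pace. Here’s an in-memory toy (not a real broker - no persistence, no retries) to show the shape of it:

```typescript
// A message queue in miniature: producers add jobs instantly; a worker
// drains them later, at whatever pace it chooses.
type TinyJob<T> = { name: string; data: T };

class TinyQueue<T> {
  private jobs: TinyJob<T>[] = [];

  add(name: string, data: T): void {
    this.jobs.push({ name, data });
  }

  // Process jobs one at a time until the queue is empty;
  // returns how many were handled.
  async drain(handler: (job: TinyJob<T>) => Promise<void>): Promise<number> {
    let processed = 0;
    while (this.jobs.length > 0) {
      await handler(this.jobs.shift()!);
      processed++;
    }
    return processed;
  }
}
```

The producer can `add()` hundreds of jobs in a burst, while the worker controls how fast `drain()` runs - that separation between “accepting work” and “doing work” is exactly what a rate limit needs.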

I looked at two potential solutions for our use case. The first is RabbitMQ, a very popular, industry-standard message broker. It’s very powerful and has lots of capabilities, but for us it’s probably a bit overkill, seeing as we are just trying to handle a rate limit on some emails.

Something better suited to our needs is BullMQ. BullMQ is node-native, and uses redis (or a compatible alternative like valkey) as a backend. It’s got a very simple architecture based on queues and jobs. It’s essentially perfect for our use case - it’s what it was designed to do.

Having settled on the tech to solve it, we then needed to work out how to integrate it with our existing code. The relevant parts here are NextJS and react-email (which we use to create pretty email templates).

Finally! The actual tutorial

To begin with, let’s define some simple config values we will need later:

```typescript
export interface EmailJob {
  text: string;
  html: string;
  subject: string;
  to: string[];
  cc?: string[];
}

export const EMAIL_QUEUE_NAME = "EMAIL_QUEUE";
```

EMAIL_QUEUE_NAME should be quite self-explanatory. It’s the name of the queue we are adding jobs to. BullMQ lets you have as many queues as you need. Each queue can have a payload type attached to it. When you push a job to the queue, you need to provide that data, and the worker can then access it and do something with it. EmailJob is the type of the payload for our jobs. Note that it has both a text and an html field, so you can specify both versions of the email.
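As a concrete example (the interface is restated here and the addresses are invented), a finished payload is just a plain object. That matters because BullMQ serialises job data to JSON when storing it in redis, so the payload has to survive a JSON round trip:

```typescript
interface EmailJob {
  text: string;
  html: string;
  subject: string;
  to: string[];
  cc?: string[];
}

// A hypothetical job for chasing up late marks:
const job: EmailJob = {
  subject: "Reminder: marks overdue",
  to: ["marker@example.com"],
  text: "Your marks are overdue.",
  html: "<p>Your marks are overdue.</p>",
};

// Round-tripping through JSON is lossless for a payload shaped like this,
// so it's safe to hand to BullMQ.
const roundTripped = JSON.parse(JSON.stringify(job)) as EmailJob;
```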

Next, we set up a way to add jobs to the queue, which looks something like this:

```typescript
import { env } from "@/env";
import IORedis from "ioredis";
import { type ReactElement } from "react";
import { render } from "@react-email/components";
import { Queue } from "bullmq";

import { EMAIL_QUEUE_NAME, type EmailJob } from "./config";

export const makeConnection = () =>
  new IORedis({
    host: env.REDIS_HOST,
    port: env.REDIS_PORT,
    maxRetriesPerRequest: null,
  });

export const makeQueue = () => {
  const connection = makeConnection();
  const emailQueue = new Queue<EmailJob>(EMAIL_QUEUE_NAME, { connection });

  async function queueEmail({
    message,
    to,
    subject,
    cc,
  }: {
    message: ReactElement;
    subject: string;
    to: string[];
    cc?: string[];
  }) {
    return await emailQueue.add("send-mail", {
      to,
      cc,
      subject,
      html: await render(message),
      text: await render(message, { plainText: true }),
    });
  }

  return queueEmail;
};
```

This is basically just a convenient wrapper around emailQueue.add, though there are a few things to note. First, we don’t construct anything here at the top level. Instead, it’s all factories. I do it this way so that if we never use the queue, we never try to connect to redis. In the real app, we have an environment variable that lets you turn the queue on and off, and if we construct the redis connection without a valid host and port, it dumps a bunch of very annoying errors into the log. This way, we only attempt to connect if it’s necessary, and we can avoid the noise.

Second, it renders the react-email templates before sending off the job. We do it this way around for a few reasons. Firstly, it avoids any potential problems we might run into serialising ReactElements in redis. Secondly, it helps keep our worker simple. The alternative would be sending all the data required to render the email in the payload, but that would make the payload type much uglier. Another advantage is that, as it stands, this matches an existing interface of our app:

```typescript
export type SendMail = ({
  message,
  to,
  subject,
  cc,
}: {
  message: ReactElement;
  subject: string;
  to: string[];
  cc?: string[];
}) => Promise<void>;
```

This is the underlying handler for sending mail. We have a class (called ‘Mailer’) that handles all the boilerplate related to emails, and it takes a SendMail function in as an argument to its constructor. So using the new system is as simple as:

```typescript
// Before: sending directly.
new Mailer(sendMail);

// After: queueing, behind an env toggle so the queue is optional.
new Mailer(env.MAIL_USE_RATE_LIMIT === "ON" ? makeQueue() : sendMail);
```

Now we need a worker - something that can process these jobs. Unfortunately, this can’t just be attached to our NextJS server, and needs to live in a separate process. Next is designed to be able to work in serverless environments, where it spins up and down dynamically to handle requests. In such environments, there is no guarantee that Next is running at all at any given time - it just so happens that we want to run it locally in a container, and know that there will always be a live thread. A result of this is that Next does not have a mechanism for attaching long running side-car processes, so we need to roll our own.

Fortunately, this isn’t particularly difficult. Let’s start by actually writing the worker script:

```typescript
import { env } from "@/env";
import { Queue, Worker, type Job } from "bullmq";
import nodemailer from "nodemailer";

import { EMAIL_QUEUE_NAME, type EmailJob } from "./config";
import { connection } from "./redis-connection";

const emailQueue = new Queue<EmailJob>(EMAIL_QUEUE_NAME, { connection });
void emailQueue.setGlobalRateLimit(
  env.MAIL_RATE_LIMIT,
  env.MAIL_RATE_LIMIT_PERIOD,
);

const transporter = nodemailer.createTransport(
  {
    host: env.MAIL_HOST,
    port: env.MAIL_PORT,
    auth: env.MAIL_PASSWORD
      ? { user: env.MAIL_USER, pass: env.MAIL_PASSWORD }
      : undefined,
  },
  { from: { address: env.MAIL_USER, name: "SPA Support" } },
);

const mailWorker = new Worker(
  EMAIL_QUEUE_NAME,
  async (job: Job<EmailJob>) => await transporter.sendMail(job.data),
  // autorun defaults to true; disabling it lets us start the worker
  // explicitly below instead of it starting in the constructor.
  { connection, autorun: false },
);

void mailWorker.run();
```

Note that this is the point where we set the actual rate limit.
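As an aside, BullMQ also lets you declare the limit when constructing the worker, via its documented `limiter` option: at most `max` jobs per `duration` milliseconds. The numbers below are illustrative, mirroring our provider’s roughly 200 emails per 10 minutes:

```typescript
// BullMQ `limiter` worker option: process at most `max` jobs per
// `duration` milliseconds. Values here are illustrative.
const limiter = {
  max: 200,
  duration: 10 * 60 * 1000, // 10 minutes in milliseconds
};

// Passed when constructing the worker, e.g.:
// new Worker(EMAIL_QUEUE_NAME, processor, { connection, limiter });
```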

As you can see, there really isn’t very much to it. It basically just grabs messages from the queue and sends them - very simple. The much more complicated part is setting this up to build. We decided to split it out to a separate container entirely. This helps keep things decoupled, so that if either the server or the worker dies, it doesn’t take the other one down with it.

To get this working, you need a TS-to-JS transpiler (we use tsup) and then a Dockerfile. Ours looks something like this:

```dockerfile
# Builder:
FROM --platform=$BUILDPLATFORM node:22-alpine AS builder
WORKDIR /app
COPY package.json .
COPY pnpm-lock.yaml .
RUN corepack enable pnpm
RUN pnpm i --frozen-lockfile
COPY tsconfig.json .
COPY src src
RUN pnpm run build

# Runner:
FROM node:22-alpine AS runner
WORKDIR /app
RUN corepack enable pnpm
ENV NODE_ENV=production
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 mailworker
COPY --from=builder /app/node_modules/ ./node_modules
COPY --from=builder /app/build .
COPY --from=builder /app/package.json .
COPY --from=builder /app/pnpm-lock.yaml .
USER mailworker
CMD ["pnpm", "run", "start"]
```
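
For reference, the tsup side can be just as small. A minimal `tsup.config.ts` might look like this - note the entry path is hypothetical, yours will depend on where the worker script lives:

```typescript
// tsup.config.ts - bundle the worker entry point to plain JS in build/,
// matching the `COPY --from=builder /app/build .` step in the Dockerfile.
import { defineConfig } from "tsup";

export default defineConfig({
  entry: ["src/index.ts"], // hypothetical entry path
  outDir: "build",
  format: ["esm"],
  clean: true, // wipe build/ before each build
});
```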

And that’s all you really need. Connecting this into the rest of the services with docker compose is pretty standard, so I won’t go into detail here - maybe another time.

In the future, we might look into using this system to implement digest emails - i.e. emails which collate a bunch of information from a timeframe and send them. That will probably be more involved, but for now this simple solution is good enough.

The code for all of this can be found in the SPA repo, and the mail-worker repo. Hope it was interesting!