Use case

Monthly PDF bundles on a Cloudflare Cron Trigger

The second most common SaaS PDF use case, after invoice generation: monthly per-customer statement bundles. A few sections per customer (summary, line items, attachments) rendered from templates, merged into one PDF, stored, emailed. The whole pipeline runs on a single Cloudflare Worker fired by a Cron Trigger, completes in seconds per customer, and costs roughly nothing.

The setup

You run a SaaS. Customers want a monthly statement that combines multiple views — usage summary, line-item detail, maybe a compliance attachment. Each section already exists as a template; what you don't have is the glue that fills each template for each customer, stitches the sections in the right order, and gets the result in front of the customer on the 1st of the month.

The shape of the problem maps to a scheduled handler calling two PDF endpoints in sequence per customer: /api/fill-form to render each section from its template, then /api/merge to bundle the sections together. Cloudflare's Cron Triggers can fire a Worker on cron syntax — 0 0 1 * * for "midnight UTC on the first of every month" — which means no extra infrastructure beyond what's already in your Workers account.

The flow

  1. Cron → CF Worker: scheduled handler fires on the 1st of the month at 00:00 UTC
  2. Worker → D1 (or your CRM): pull the customer list and last month's usage rows (~50-200ms)
  3. Worker → KV: fetch the two section templates once at the start, reused per customer (~10ms)
  4. Per customer, in parallel batches:
    • POST /api/fill-form twice (summary + line-items), ~200-300ms total
    • POST /api/merge with the two filled PDFs, ~80-120ms
    • Worker → R2: stash the bundle keyed by {customer_id}/{YYYY-MM}.pdf (~30-50ms)
    • Worker → email service: queue a transactional email with a signed download link (~10ms)

End-to-end per customer: ~400-600ms. At concurrency 16, a 1,000-customer fleet finishes in roughly half a minute of wall time (run strictly sequentially it would take ~8 minutes), well under Cloudflare's scheduled-handler ceiling.
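A quick sanity check on those numbers. The per-customer latency and the concurrency figure are the assumptions from the flow above; the only modeling choice is that each batch is gated by its slowest customer:

```typescript
// Wall-time estimate for a batched fan-out: ceil(N / concurrency) batches,
// each taking roughly one per-customer round trip.
function estimateRunSeconds(
  customers: number,
  perCustomerMs: number,
  concurrency: number,
): number {
  const batches = Math.ceil(customers / concurrency);
  return (batches * perCustomerMs) / 1000;
}

estimateRunSeconds(1000, 500, 16); // 63 batches × 500ms ≈ 31.5s
```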

The Worker

Under a hundred lines of TypeScript. The interesting part is the per-customer pipeline; the rest is plumbing. Concurrency is bounded by chunking the customer list into batches — Workers gives you fan-out via Promise.all, but you'll hit subrequest limits if you fire 1,000 in flight. Note the limit is per invocation, not per instant, so past a certain fleet size the run has to be split across invocations rather than merely throttled.

export interface Env {
  STATEMENTS_KV: KVNamespace;       // section templates
  STATEMENTS_R2: R2Bucket;          // rendered bundles
  STATEMENTS_DB: D1Database;        // customers + usage
  EMAIL_WEBHOOK_URL: string;        // your transactional email
}

export default {
  async scheduled(controller: ScheduledController, env: Env, ctx: ExecutionContext) {
    const period = lastMonthISO();        // e.g. "2026-04"
    const [summaryTpl, lineTpl] = await Promise.all([
      env.STATEMENTS_KV.get('summary-template', 'arrayBuffer'),
      env.STATEMENTS_KV.get('line-items-template', 'arrayBuffer'),
    ]);
    if (!summaryTpl || !lineTpl) {
      throw new Error('Templates missing from KV');
    }

    const { results: customers } = await env.STATEMENTS_DB
      .prepare('SELECT id, name, email FROM customers WHERE active = 1')
      .all<{ id: string; name: string; email: string }>();

    // Bound concurrency to 16 so subrequests don't pile up
    const chunk = 16;
    for (let i = 0; i < customers.length; i += chunk) {
      await Promise.all(
        customers.slice(i, i + chunk).map(c =>
          buildStatementForCustomer(c, period, summaryTpl, lineTpl, env)
        )
      );
    }
  },
};

async function buildStatementForCustomer(
  customer: { id: string; name: string; email: string },
  period: string,
  summaryTpl: ArrayBuffer,
  lineTpl: ArrayBuffer,
  env: Env,
) {
  const { results: usage } = await env.STATEMENTS_DB
    .prepare('SELECT * FROM usage WHERE customer_id = ? AND period = ?')
    .bind(customer.id, period).all();

  // 1. Render the two sections in parallel via /api/fill-form
  const summaryReq = fillForm(summaryTpl, {
    customer_name: customer.name,
    period_label:  prettyPeriod(period),
    total_amount:  totalFromUsage(usage),
    invoice_id:    `STMT-${period}-${customer.id.slice(-6)}`,
  });
  const linesReq = fillForm(lineTpl, {
    customer_name: customer.name,
    items_json:    JSON.stringify(usage.slice(0, 100)), // cap for template
  });
  const [summary, lines] = await Promise.all([summaryReq, linesReq]);

  // 2. Merge the two sections into one bundle
  const bundle = await mergePdfs([summary, lines]);

  // 3. Stash in R2 (organized by customer/period)
  const r2Key = `statements/${customer.id}/${period}.pdf`;
  await env.STATEMENTS_R2.put(r2Key, bundle, {
    httpMetadata: { contentType: 'application/pdf' },
  });

  // 4. Email a signed download URL. The URL is signed via R2's
  //    S3-compatible presign API so it expires in 7 days — no
  //    auth-boundary footgun like the Stripe-invoice post called out.
  //    Presigning needs R2 access keys, not just the bucket binding,
  //    and the POST is awaited: an un-awaited fetch can be cancelled
  //    when the invocation ends unless it's handed to ctx.waitUntil.
  const signedUrl = await presignR2(env, r2Key, 7 * 86400);
  await fetch(env.EMAIL_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      to: customer.email,
      subject: `Your ${prettyPeriod(period)} statement`,
      html: `<p>Your statement is ready: <a href="${signedUrl}">download</a></p>`,
    }),
  }).catch(() => {});  // a failed email should not sink the whole batch
}

async function fillForm(template: ArrayBuffer, fields: Record<string, string>) {
  const fd = new FormData();
  fd.append('pdf', new Blob([template], { type: 'application/pdf' }), 'tpl.pdf');
  fd.append('fields', JSON.stringify(fields));
  const r = await fetch('https://pdfops.dev/api/fill-form', { method: 'POST', body: fd });
  if (!r.ok) throw new Error(`fill-form: ${r.status}`);
  return await r.arrayBuffer();
}

async function mergePdfs(parts: ArrayBuffer[]) {
  const fd = new FormData();
  for (const p of parts) fd.append('pdf', new Blob([p], { type: 'application/pdf' }), 'p.pdf');
  const r = await fetch('https://pdfops.dev/api/merge', { method: 'POST', body: fd });
  if (!r.ok) throw new Error(`merge: ${r.status}`);
  return await r.arrayBuffer();
}
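The listing leans on a few small helpers it doesn't show. One plausible sketch of the pure ones — the `amount_cents` column on usage rows is an assumption, not part of the article's schema, so adjust to yours:

```typescript
// Plausible implementations of the helpers the Worker references.

const MONTHS = ['January', 'February', 'March', 'April', 'May', 'June',
  'July', 'August', 'September', 'October', 'November', 'December'];

// Run date in May 2026 → previous month, "2026-04"
function lastMonthISO(now: Date = new Date()): string {
  const d = new Date(Date.UTC(now.getUTCFullYear(), now.getUTCMonth(), 1));
  d.setUTCDate(0); // day 0 = last day of the previous month
  return `${d.getUTCFullYear()}-${String(d.getUTCMonth() + 1).padStart(2, '0')}`;
}

// "2026-04" → "April 2026"
function prettyPeriod(period: string): string {
  const [year, month] = period.split('-');
  return `${MONTHS[Number(month) - 1]} ${year}`;
}

// Sum usage rows into a display string like "$2,450.00"
// (assumes an integer amount_cents column).
function totalFromUsage(rows: Array<Record<string, unknown>>): string {
  const cents = rows.reduce((sum, r) => sum + (Number(r['amount_cents']) || 0), 0);
  return (cents / 100).toLocaleString('en-US', { style: 'currency', currency: 'USD' });
}
```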

The auth-boundary fix from the Stripe-invoice post is baked into this one — the email contains a 7-day R2 presigned URL, not the raw object path. If you're going to send N customers a link, signing the URL costs ~1ms per customer and removes the entire "Stripe ID is guessable" class of footgun.
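One wrinkle worth spelling out: the R2 bucket binding alone can't mint presigned URLs — presigning goes through R2's S3-compatible endpoint with account-scoped access keys (stored as Worker secrets). A sketch of what presignR2 could look like using the aws4fetch library; the bucket name, env field names, and account ID are assumptions, not part of the article's Env interface:

```typescript
// Hypothetical secrets/vars for presigning — supply via `wrangler secret put`.
interface PresignEnv {
  R2_ACCOUNT_ID: string;
  R2_ACCESS_KEY_ID: string;
  R2_SECRET_ACCESS_KEY: string;
}

// Build the S3-compatible object URL with the expiry carried in the query.
function r2ObjectUrl(
  accountId: string,
  bucket: string,
  key: string,
  expiresSeconds: number,
): URL {
  const url = new URL(`https://${bucket}.${accountId}.r2.cloudflarestorage.com/${key}`);
  url.searchParams.set('X-Amz-Expires', String(expiresSeconds));
  return url;
}

// SigV4-sign the GET as a query-string signature → shareable presigned URL.
async function presignR2(env: PresignEnv, key: string, expiresSeconds: number): Promise<string> {
  // aws4fetch is a third-party package; loaded lazily so the pure URL
  // builder above has no dependency on it.
  // @ts-ignore -- module types resolve once the package is installed
  const { AwsClient } = await import('aws4fetch');
  const client = new AwsClient({
    accessKeyId: env.R2_ACCESS_KEY_ID,
    secretAccessKey: env.R2_SECRET_ACCESS_KEY,
  });
  const url = r2ObjectUrl(env.R2_ACCOUNT_ID, 'statements', key, expiresSeconds);
  const signed = await client.sign(new Request(url, { method: 'GET' }), {
    aws: { signQuery: true }, // signature in the query string, not headers
  });
  return signed.url;
}
```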

The Cron Trigger config

Two lines in wrangler.toml:

[triggers]
crons = ["0 0 1 * *"]   # 00:00 UTC on the 1st of each month

That's the whole scheduling layer. Cloudflare bills you for the Worker's CPU time during the scheduled invocation, not for the cron itself — and since this pipeline spends nearly all of its wall time waiting on fetch, D1, and R2, the billable CPU time for a 1,000-customer run is a small fraction of the wall clock, comfortably inside any paid Workers plan.
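For completeness, the bindings the Env interface expects are declared in the same file. A sketch — the binding names must match the interface, but the IDs and names here are placeholders, not real values:

```toml
[[kv_namespaces]]
binding = "STATEMENTS_KV"
id = "<your-kv-namespace-id>"

[[r2_buckets]]
binding = "STATEMENTS_R2"
bucket_name = "statements"

[[d1_databases]]
binding = "STATEMENTS_DB"
database_name = "statements"
database_id = "<your-d1-database-id>"

[vars]
EMAIL_WEBHOOK_URL = "https://example.com/send"
```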

The cost math

For a 5,000-customer SaaS shipping monthly statements, one bundle is 2 fill-form calls + 1 merge call, i.e. 3 PDFops calls per customer and 15,000 calls a month.

The two-step shape (fill + merge) is where the substrate economics show up most clearly. Incumbent hosted PDF services price each operation as a separate render; on Workers, the marginal cost of the second call is the same as the first — there's no per-render runtime spin-up.
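The call arithmetic above, spelled out with the figures from this section:

```typescript
// 5,000 customers, each bundle = 2 fill-form calls + 1 merge call.
const customers = 5_000;
const callsPerBundle = 2 + 1;                      // fill-form ×2, merge ×1
const callsPerMonth = customers * callsPerBundle;  // 15,000 PDFops calls/month
console.log(callsPerMonth);
```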

When this pattern doesn't fit

Two boundaries to watch. If customers need statements on demand rather than on the 1st, the same pipeline belongs behind a fetch handler instead of a cron. And each customer costs a handful of subrequests (D1 query, two fills, a merge, R2 put, email POST), so a large enough fleet will exhaust the per-invocation subrequest cap in a single scheduled run and has to be split across multiple invocations.

Try it

Both endpoints used in this pattern are live during beta:

# fill-form (per section)
curl -X POST https://pdfops.dev/api/fill-form \
  -F "pdf=@summary-template.pdf" \
  -F 'fields={"customer_name":"Acme Co","period_label":"April 2026","total_amount":"$2,450.00","invoice_id":"STMT-2026-04-001"}' \
  -o summary.pdf

# merge (combine the sections)
curl -X POST https://pdfops.dev/api/merge \
  -F "pdf=@summary.pdf" \
  -F "pdf=@line-items.pdf" \
  -o statement.pdf

You'll get a merged statement back. Wire it into a Cron Trigger handler shaped like the code above and you have monthly customer statements running globally with no servers to manage.

Missing endpoints, integration questions, scaling weirdness? The waitlist's message field is the fastest way to shape what ships next.