feat(api): switch to unkey for rate limiting #292
Conversation
Walkthrough

Replaces @upstash/ratelimit with @unkey/ratelimit, updates the tRPC rate-limiting implementation to use UNKEY_ROOT_KEY, and moves route-level throttling from inline logic to meta-based annotations for the early-access and places routers. Adds UNKEY_ROOT_KEY to the env schema and turbo tasks. Updates the package.json dependency.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Actionable comments posted: 3
🧹 Nitpick comments (4)
packages/api/src/trpc.ts (2)
80-84: Avoid constructing a new Ratelimit instance on every request

Instantiating Ratelimit per call adds overhead. Cache per unique (namespace, limit, duration) triple.

Apply this diff to use a cached instance:

```diff
- const limiter = new Ratelimit({
-   namespace: meta.ratelimit.namespace,
-   limit: meta.ratelimit.limit,
-   duration: meta.ratelimit.duration,
-   rootKey: env.UNKEY_ROOT_KEY,
- });
+ const limiter = getLimiterFor(meta, env.UNKEY_ROOT_KEY);
```

Add this helper near the top level (outside the middleware):

```ts
// cache: one limiter per (namespace, limit, duration)
const limiterCache = new Map<string, Ratelimit>();

function getLimiterFor(meta: Meta, rootKey: string) {
  const key = `${meta.ratelimit.namespace}:${meta.ratelimit.limit}:${meta.ratelimit.duration}`;
  let limiter = limiterCache.get(key);
  if (!limiter) {
    limiter = new Ratelimit({
      namespace: meta.ratelimit.namespace,
      limit: meta.ratelimit.limit,
      duration: meta.ratelimit.duration,
      rootKey,
    });
    limiterCache.set(key, limiter);
  }
  return limiter;
}
```
86-88: Fail-open path on limiter outages to preserve availability

A transient Unkey outage would currently 500 the request if limit() throws. Consider failing open and letting Sentry capture the error.

```diff
- const result = await limiter.limit(identifier);
+ let result;
+ try {
+   result = await limiter.limit(identifier);
+ } catch (err) {
+   // Preserve availability on limiter errors; telemetry will be captured by Sentry middleware.
+   return next();
+ }
```

packages/api/src/routers/places.ts (2)
16-20: Resolver no longer needs ctx; consider upstream failure mapping and provider reuse

- Dropping ctx from the handler is fine since rate limiting moved to middleware. Ensure the middleware still has access to the same ctx to derive identifiers.
- Optional hardening:
  - Map upstream provider failures to a stable tRPC error (e.g., BAD_GATEWAY) and log once. This prevents leaking upstream error shapes and keeps clients consistent.
  - Reuse a module-scoped GooglePlacesProvider to keep the HTTP agent/DNS warm and slightly reduce per-request overhead.

Module-scoped provider (outside this range, near the imports):

```ts
// At module scope:
const placesProvider = new GooglePlacesProvider();
```

Then remove the per-request instantiation here:

```diff
- const placesProvider = new GooglePlacesProvider();
+ // use module-scoped `placesProvider` (declared above) to reuse connections
```

If you decide to add upstream error mapping, you can minimally wrap the call:

```diff
 .query(async ({ input }) => {
-  return await placesProvider.autocomplete(input.input, {
-    languageCode: input.languageCode,
-    limit: input.limit,
-  });
+  try {
+    return await placesProvider.autocomplete(input.input, {
+      languageCode: input.languageCode,
+      limit: input.limit,
+    });
+  } catch (err) {
+    // import { TRPCError } from "@trpc/server" at the top if you adopt this
+    throw new TRPCError({ code: "BAD_GATEWAY", message: "Places upstream failed" });
+  }
 }),
```

Consider verifying whether your provider already implements timeouts/abort signals to avoid hanging calls that waste quota.
7-14: Meta-driven rate limiting verification complete

- Duration parsing: the `Meta.ratelimit.duration` field is typed as a string literal (`${number}${Unit}` or `${number} ${Unit}`) and is passed directly into Unkey's Ratelimit constructor. Unkey's implementation natively understands humanized strings like "1m" and converts them to milliseconds at runtime.
- Identifier scope: in createTRPCContext we set ctx.rateLimit.id = session.user?.id ?? getIp(headers), and in the middleware we build the key as `${procedureName}:${ctx.rateLimit.id}`. That ensures authenticated users and unauthenticated clients are bucketed separately with a clear precedence (userId → IP).
- Namespace granularity: the "google-places" namespace is only used by this single places.autocomplete endpoint. If your intention is to share a global quota across all Google Places calls, it's fine as-is. For per-endpoint isolation, consider a more specific namespace (e.g. "places:autocomplete") and a lower limit to tighten the budget:

```diff
 .meta({
   procedureName: "places.autocomplete",
   ratelimit: {
-    namespace: "google-places",
-    limit: 20,
+    namespace: "places:autocomplete",
+    limit: 10,
     duration: "1m",
   },
 })
```

No further changes are required unless you want to adopt the optional tighter scope above.
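For intuition only, here is a self-contained sketch of how such humanized duration strings can be resolved to milliseconds. This is an illustration of the `${number}${Unit}` / `${number} ${Unit}` format described above, not Unkey's actual parser:

```typescript
type Unit = "ms" | "s" | "m" | "h" | "d";

const UNIT_MS: Record<Unit, number> = {
  ms: 1,
  s: 1_000,
  m: 60_000,
  h: 3_600_000,
  d: 86_400_000,
};

// Parse strings like "1m", "30 s", or "250ms" into milliseconds.
function durationToMs(duration: string): number {
  const match = duration.match(/^(\d+)\s?(ms|s|m|h|d)$/);
  if (!match) throw new Error(`invalid duration: ${duration}`);
  return Number(match[1]) * UNIT_MS[match[2] as Unit];
}

console.log(durationToMs("1m"));   // 60000
console.log(durationToMs("30 s")); // 30000
```

The important takeaway is that the duration string participates in the cache key suggested earlier, so "1m" and "60 s" would produce distinct limiter instances even though they describe the same window.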
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)

- bun.lock is excluded by !**/*.lock

📒 Files selected for processing (6)

- packages/api/package.json (1 hunks)
- packages/api/src/routers/early-access.ts (1 hunks)
- packages/api/src/routers/places.ts (1 hunks)
- packages/api/src/trpc.ts (2 hunks)
- packages/env/src/server.ts (1 hunks)
- turbo.json (2 hunks)
🔇 Additional comments (5)
packages/api/package.json (2)
37-37: Switched to @unkey/ratelimit — dependency change looks correct

The move to @unkey/ratelimit@^2.1.2 aligns with the middleware changes in src/trpc.ts. No concerns on this line.

37-37: No stale @upstash/ratelimit references and lockfiles are clean

All checks pass:

- No imports of @upstash/ratelimit found in the repo.
- No occurrences of @upstash/ratelimit in any lockfile.
- @unkey/ratelimit is pinned to the latest published v2.1.2.

No further action required.
packages/api/src/trpc.ts (2)
5-5: Import migration to @unkey/ratelimit and env wiring look good

The new import and server env usage are consistent with the PR goals.

Also applies to: 10-10

86-88: Clarify rate-limit bucket scope

Currently, the identifier passed to the limiter is:

```ts
// packages/api/src/trpc.ts:79–84
const identifier = `${meta.procedureName}:${ctx.rateLimit.id}`;
const result = await limiter.limit(identifier);
```

This yields a separate bucket for each procedureName + user/IP combination. If instead you intend to share one bucket per namespace (regardless of procedure), you should remove the procedure name from the key:

```diff
- const identifier = `${meta.procedureName}:${ctx.rateLimit.id}`;
+ const identifier = `${ctx.rateLimit.id}`;
```

- Existing procedures with .meta({ procedureName: "…", ratelimit: { namespace, … } }) (e.g. places.autocomplete, earlyAccess.joinWaitlist) will each get isolated buckets.
- If you prefer a shared quota across all procedures in a given namespace, switch to the simplified identifier above.

Note on metadata/tags: the SDK's limiter.limit(identifier, opts) currently only supports cost and async in opts; there's no built-in way to attach custom metadata or tags for analytics in the rate-limit call itself. If you need to record procedure or user info for reporting, you'll need to emit those details separately in your application logic.

Please confirm which behavior you'd like (per-procedure vs. shared) so we can align the implementation accordingly.
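To make the two bucketing strategies concrete, here is a minimal self-contained sketch. An in-memory counter stands in for the real Unkey backend; the Meta shape is taken from the PR, everything else is illustrative:

```typescript
interface RateLimitMeta {
  procedureName: string;
  ratelimit: { namespace: string; limit: number; duration: string };
}

// In-memory hit counts keyed by identifier (stand-in for the limiter backend).
const hits = new Map<string, number>();

// Per-procedure buckets: "places.autocomplete:user-1" vs "earlyAccess.joinWaitlist:user-1"
function perProcedureId(meta: RateLimitMeta, rateLimitId: string): string {
  return `${meta.procedureName}:${rateLimitId}`;
}

// Shared bucket per caller: every procedure drains the same quota for "user-1".
function sharedId(rateLimitId: string): string {
  return rateLimitId;
}

// Returns true while the caller is under the limit.
function hit(identifier: string, limit: number): boolean {
  const count = (hits.get(identifier) ?? 0) + 1;
  hits.set(identifier, count);
  return count <= limit;
}

const meta: RateLimitMeta = {
  procedureName: "places.autocomplete",
  ratelimit: { namespace: "google-places", limit: 2, duration: "1m" },
};

hit(perProcedureId(meta, "user-1"), meta.ratelimit.limit); // 1st call: allowed
hit(perProcedureId(meta, "user-1"), meta.ratelimit.limit); // 2nd call: allowed
const allowed = hit(perProcedureId(meta, "user-1"), meta.ratelimit.limit); // 3rd: blocked
console.log(allowed); // false
```

With perProcedureId, a burst against one procedure never starves another; with sharedId, all procedures in the namespace compete for one quota. Either is defensible, which is why the review asks the author to pick one explicitly.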
packages/api/src/routers/early-access.ts (1)
26-33: Meta-based rate limiting is correctly applied

Namespace, limit, and duration look sane for a waitlist endpoint. This fits the new middleware.
💡 Verification agent
🧩 Analysis chain
Prevent duplicate waitlist entries and normalize emails
There's a race between the existence check and the insert, which can yield duplicates under concurrency. Also, email comparisons are typically case-insensitive; normalize before comparing/inserting.

- Add a unique constraint on waitlist.email at the database level.
- Use onConflictDoNothing (or equivalent) and normalize email:
```ts
// normalize
const email = input.email.trim().toLowerCase();

// atomic insert-or-ignore
const inserted = await db
  .insert(waitlist)
  .values({ email })
  .onConflictDoNothing({ target: waitlist.email });

// report outcome
if (inserted.rowCount === 0) {
  return { message: "You're already on the waitlist!" };
}
return { message: "You've been added to the waitlist!" };
```

Consider returning the same generic message regardless of prior presence to avoid leaking list membership.
🏁 Script executed:

```bash
#!/bin/bash
# Check if a unique constraint or index already exists for waitlist.email
rg -nP --type=ts -g '!**/node_modules/**' '(unique|uniqueIndex)\(.*waitlist.*email' db schema 2>/dev/null || echo "No unique constraint found; consider adding one in the Drizzle schema."
```

Length of output: 246
Add unique constraint on waitlist.email and use atomic insert with normalized emails
The current implementation still risks duplicate waitlist entries under concurrency and treats emails case-sensitively. We’ve verified there’s no unique constraint on waitlist.email in your Drizzle schema; please add one and switch to an atomic “insert-or-ignore” flow using a normalized email.
• db/schema.ts (Drizzle schema)
  – Add .unique() (or a unique index) to the email column on the waitlist table.
• packages/api/src/routers/early-access.ts (mutation resolver)
  – Normalize the incoming email via input.email.trim().toLowerCase().
  – Replace the separate existence check + insert with:

```diff
 .mutation(async ({ input }) => {
-  // existing exists-then-insert logic
+  const email = input.email.trim().toLowerCase();
+
+  const inserted = await db
+    .insert(waitlist)
+    .values({ email })
+    .onConflictDoNothing({ target: waitlist.email });
+
+  if (inserted.rowCount === 0) {
+    return { message: "You're already on the waitlist!" };
+  }
+  return { message: "You've been added to the waitlist!" };
 })
```

• (Optionally) To prevent leaking who's on the list, unify both outcomes under a generic message like "Your request has been received."
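As a self-contained illustration of the normalize-then-insert-or-ignore semantics, here is a sketch where an in-memory Set stands in for the database's unique index. Note this does not replace the Drizzle constraint described above, which is what actually closes the race under concurrency:

```typescript
// Stand-in for a waitlist table with a unique index on email.
const waitlistEmails = new Set<string>();

function normalizeEmail(raw: string): string {
  return raw.trim().toLowerCase();
}

// Insert-or-ignore: `added` is true only if the email was newly recorded.
function joinWaitlist(rawEmail: string): { added: boolean; message: string } {
  const email = normalizeEmail(rawEmail);
  if (waitlistEmails.has(email)) {
    return { added: false, message: "You're already on the waitlist!" };
  }
  waitlistEmails.add(email);
  return { added: true, message: "You've been added to the waitlist!" };
}

console.log(joinWaitlist("  Alice@Example.com ").added); // true
console.log(joinWaitlist("alice@example.com").added);    // false: same email after normalization
```

In the real resolver the uniqueness decision must come from the database (ON CONFLICT DO NOTHING against the unique index), not an application-level membership check like this one, since two concurrent requests can both pass such a check.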
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
.mutation(async ({ input }) => {
  const email = input.email.trim().toLowerCase();

  const inserted = await db
    .insert(waitlist)
    .values({ email })
    .onConflictDoNothing({ target: waitlist.email });

  if (inserted.rowCount === 0) {
    return { message: "You're already on the waitlist!" };
  }
  return { message: "You've been added to the waitlist!" };
})
```
🤖 Prompt for AI Agents
In packages/api/src/routers/early-access.ts around line 39 and db/schema.ts, the
waitlist flow is vulnerable to race conditions and treats emails
case-sensitively; modify db/schema.ts to add a unique constraint on
waitlist.email (either .unique() on the column or create a unique index in the
Drizzle schema) and update the mutation in
packages/api/src/routers/early-access.ts to normalize the input with
input.email.trim().toLowerCase(), then replace the separate existence-check +
insert with an atomic insert-or-ignore/upsert (use Drizzle's insert with
onConflict/doNothing or a parameterized INSERT ... ON CONFLICT DO NOTHING) so
duplicates are prevented under concurrency; optionally return a non-revealing
generic message like "Your request has been received." for both outcomes.
```ts
VERCEL_URL: z.string().optional(),
UPSTASH_REDIS_REST_URL: z.string().url(),
UPSTASH_REDIS_REST_TOKEN: z.string().min(1),
UNKEY_ROOT_KEY: z.string().min(1),
```
💡 Verification agent
🧩 Analysis chain
UNKEY_ROOT_KEY is required — confirm usage stays server-only and available in non-prod
- It's correctly typed as server-only. Keep imports to @repo/env/server (not the client bundle) to avoid accidental exposure.
- With skipValidation outside production, dev builds won't fail if the var is missing, but your middleware will. If you expect local dev without Unkey keys, consider a fail-open path or making this conditional in dev.

If local dev without Unkey is common, consider a no-op limiter when UNKEY_ROOT_KEY is absent in non-production to preserve developer flow.
🏁 Script executed:

```bash
#!/bin/bash
# Ensure all env imports use the server entry to prevent leaking secrets
rg -nP '@repo/env(?!/server)' -g '!**/node_modules/**' || echo "OK: only server imports detected."
```

Length of output: 827
Fix server-only imports for UNKEY_ROOT_KEY
The environment variable UNKEY_ROOT_KEY must remain strictly server-side to avoid leaking secrets. I’ve identified client imports that need to be updated:
- Replace `import { env } from "@repo/env/client"` with `import { env } from "@repo/env/server"` in:
  - apps/web/src/components/settings-dialog/tabs/accounts.tsx (line 7)
  - apps/web/src/lib/trpc/client.tsx (line 17)

Optional: If local development without Unkey credentials is expected, consider implementing a no-op or "fail-open" path when UNKEY_ROOT_KEY is missing in non-production builds to preserve developer flow.
🤖 Prompt for AI Agents
In packages/env/src/server.ts around line 20, the UNKEY_ROOT_KEY env var is
defined as a server-only secret; ensure it never gets bundled into client code
by replacing any client imports of the env module with the server version:
change import { env } from "@repo/env/client" to import { env } from
"@repo/env/server" in apps/web/src/components/settings-dialog/tabs/accounts.tsx
(line ~7) and apps/web/src/lib/trpc/client.tsx (line ~17). Verify these files
only use server-side code where executed on the server (or move secret-dependent
logic to API routes). Optionally, for local dev, add a guarded no-op or
fail-open branch that only runs in non-production when UNKEY_ROOT_KEY is missing
so devs aren’t blocked, but do not expose the real secret in client bundles.
```json
"NEXT_PUBLIC_VERCEL_ENV",
"UPSTASH_REDIS_REST_URL",
"UPSTASH_REDIS_REST_TOKEN",
"UNKEY_ROOT_KEY",
```
💡 Verification agent
🧩 Analysis chain
UNKEY_ROOT_KEY added to build/dev env — make sure CI and preview envs provide it
Good to see UNKEY_ROOT_KEY wired into Turbo so local/dev builds won’t silently miss it. Ensure:
- All GitHub Actions/CI pipelines that run build/dev export UNKEY_ROOT_KEY.
- Any preview/staging deploy pipelines set it as well.
Consider listing UNKEY_ROOT_KEY in .env.example/docs to ease onboarding.
Also applies to: 52-52
🏁 Script executed:

```bash
#!/bin/bash
# Verify references to UNKEY_ROOT_KEY across workflows and docs
fd -H -a '.github/workflows' 2>/dev/null | xargs -r rg -n "UNKEY_ROOT_KEY" || echo "No references in workflows; ensure it's added where needed."
# Double-check no client code is reading UNKEY_ROOT_KEY directly
rg -nP --type=ts --type=tsx -g '!**/node_modules/**' 'process\.env\.UNKEY_ROOT_KEY' || echo "OK: no direct client reads found."
```

Length of output: 251
🏁 Script executed:

```bash
#!/bin/bash
# Locate any .env, .env.example, or related environment files
fd -t f -g '.env*' 2>/dev/null
# Search for UNKEY_ROOT_KEY in environment files and Markdown docs
rg -n 'UNKEY_ROOT_KEY' -g '*.env*' -g '*.md' || echo "UNKEY_ROOT_KEY not found in env/example or docs"
```

Length of output: 170
🏁 Script executed:

```bash
#!/bin/bash
echo "Listing all GitHub Actions workflow files:"
fd -t f -g ".github/workflows/*.yml" || echo "No workflow files found"
echo
echo "Checking for .env.example or .env.sample files:"
fd -t f -g ".env.example" -g ".env.sample" || echo "No example env files found"
```

Length of output: 714
🏁 Script executed:

```bash
#!/bin/bash
set -e
echo "==> Finding GitHub Actions workflow files"
if [ -d .github/workflows ]; then
  find .github/workflows -type f \( -name '*.yml' -o -name '*.yaml' \) -print
else
  echo "⚠️ No .github/workflows directory found"
fi
echo
echo "==> Searching workflows for UNKEY_ROOT_KEY"
if [ -d .github/workflows ]; then
  grep -RHn 'UNKEY_ROOT_KEY' .github/workflows || echo "⚠️ UNKEY_ROOT_KEY not found in any workflow"
fi
echo
echo "==> Checking for .env.example and .env.sample"
for f in .env.example .env.sample; do
  if [ -f "$f" ]; then
    echo "Found $f — checking contents"
    grep -Hn 'UNKEY_ROOT_KEY' "$f" || echo "⚠️ UNKEY_ROOT_KEY missing in $f"
  else
    echo "ℹ️ $f not present"
  fi
done
```

Length of output: 1107
Ensure UNKEY_ROOT_KEY is configured in CI workflows and example env files
Verification shows that although you've wired UNKEY_ROOT_KEY into turbo.json, it's currently:

- Absent from all GitHub Actions workflows that invoke build/dev (e.g. .github/workflows/check-format.yaml, .github/workflows/check-types.yaml, .github/workflows/lint.yaml)
- Missing in .env.example for onboarding
Please add:

- In each workflow that calls `turbo run build` or `turbo run dev`, under the top-level `env:` section:

```yaml
env:
  UNKEY_ROOT_KEY: ${{ secrets.UNKEY_ROOT_KEY }}
```

- A placeholder entry in .env.example:

```bash
# Unkey service root key
UNKEY_ROOT_KEY=your_unkey_root_key_here
```

Also confirm that any staging/preview deployment pipelines export the same environment variable.
🤖 Prompt for AI Agents
In turbo.json around line 19 the UNKEY_ROOT_KEY token is referenced but CI and
examples lack configuration; add UNKEY_ROOT_KEY to every GitHub Actions workflow
that runs `turbo run build` or `turbo run dev` by inserting under the workflow's
top-level `env:` section `UNKEY_ROOT_KEY: ${{ secrets.UNKEY_ROOT_KEY }}` (apply
to .github/workflows/check-format.yaml, check-types.yaml, lint.yaml and any
other workflows invoking turbo), add a placeholder entry to .env.example with a
comment like `# Unkey service root key` and
`UNKEY_ROOT_KEY=your_unkey_root_key_here`, and ensure any staging/preview
deployment pipelines also export the same environment variable from their
secrets.
Summary

- Add UNKEY_ROOT_KEY env variable and expose it in turbo config

Testing

- bun run format
- bun run lint

https://chatgpt.com/codex/tasks/task_e_689c784cef0c832b851a235be447e22d