
Conversation

@JeanMeijer
Collaborator

@JeanMeijer JeanMeijer commented Aug 13, 2025

Summary

  • replace Upstash with Unkey in tRPC rate limiting middleware
  • use Unkey for early access and places routers
  • add UNKEY_ROOT_KEY env variable and expose it in turbo config
  • restore Redis utility and dependency
  • annotate the early access and places routers with rate limit metadata consumed by the shared middleware
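The declarative rate limiting described above can be sketched as a plain fixed-window limiter keyed per namespace and identifier. This is an illustrative stand-in only, not the Unkey SDK (which enforces limits server-side via the root key); the class and field names here are assumptions:

```typescript
// Illustrative fixed-window limiter mirroring the "2 requests per 1m"
// waitlist policy. NOT the @unkey/ratelimit implementation.
type RatelimitMeta = { namespace: string; limit: number; duration: number }; // duration in ms

class FixedWindowLimiter {
  private windows = new Map<string, { start: number; count: number }>();
  constructor(private meta: RatelimitMeta) {}

  limit(identifier: string, now: number = Date.now()): { success: boolean; remaining: number } {
    const key = `${this.meta.namespace}:${identifier}`;
    const win = this.windows.get(key);
    if (!win || now - win.start >= this.meta.duration) {
      // start a fresh window for this key
      this.windows.set(key, { start: now, count: 1 });
      return { success: true, remaining: this.meta.limit - 1 };
    }
    win.count += 1;
    return {
      success: win.count <= this.meta.limit,
      remaining: Math.max(0, this.meta.limit - win.count),
    };
  }
}
```

With the waitlist policy (limit 2, one-minute window), a third call from the same identifier inside the window is rejected, and a call after the window expires is allowed again.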

Testing

  • bun run format
  • bun run lint

https://chatgpt.com/codex/tasks/task_e_689c784cef0c832b851a235be447e22d

Summary by CodeRabbit

  • New Features
    • Introduced declarative rate limits: Early-access waitlist (2 requests/min) and Places autocomplete (20 requests/min) for more consistent throttling.
  • Refactor
    • Migrated backend rate limiting to a new provider, simplifying implementation and improving reliability.
  • Chores
    • Added a required environment variable for the rate limiter.
    • Updated build and dev configurations to include the new environment variable.


coderabbitai bot commented Aug 13, 2025

Walkthrough

Replaces @upstash/ratelimit with @unkey/ratelimit, updates the tRPC rate-limiting middleware to use UNKEY_ROOT_KEY, and moves route-level throttling from inline logic to meta-based annotations on the early-access and places routers. Adds UNKEY_ROOT_KEY to the env schema and turbo tasks, and updates the package.json dependency.

Changes

Cohort / File(s) Summary
Rate limiting infrastructure
packages/api/src/trpc.ts, packages/env/src/server.ts, turbo.json, packages/api/package.json
Switch ratelimiter from @upstash/ratelimit to @unkey/ratelimit; initialize limiter with rootKey from env.UNKEY_ROOT_KEY; remove Redis usage; adjust limit keying; add UNKEY_ROOT_KEY to env schema and turbo build/dev env; update dependency versions.
Router: early access
packages/api/src/routers/early-access.ts
Remove IP-based runtime rate limiting and ctx usage; add meta-based rate limit (namespace early-access-waitlist, 2 per 1m) with procedureName earlyAccess.joinWaitlist; mutation now only accepts input.
Router: places
packages/api/src/routers/places.ts
Remove inline rate limiting and ctx usage; add meta-based rate limit (namespace google-places, 20 per 1m) with procedureName places.autocomplete; resolver now only accepts input.
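The meta shape these routers now carry can be sketched as follows. Field names are inferred from the walkthrough above (procedureName, namespace, limit, duration); the actual types in the PR may differ:

```typescript
// Illustrative shapes only; values taken from the PR summary.
type Duration = `${number}${"s" | "m" | "h"}`;

interface ProcedureMeta {
  procedureName: string;
  ratelimit: { namespace: string; limit: number; duration: Duration };
}

const earlyAccessMeta: ProcedureMeta = {
  procedureName: "earlyAccess.joinWaitlist",
  ratelimit: { namespace: "early-access-waitlist", limit: 2, duration: "1m" },
};

const placesMeta: ProcedureMeta = {
  procedureName: "places.autocomplete",
  ratelimit: { namespace: "google-places", limit: 20, duration: "1m" },
};
```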

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

  • feat: waitlist rate limit #59 — Similar rate-limiting work touching early-access router, env schema, package dependencies, and turbo.json, indicating a related refactor to centralized rate limiting.


@vercel

vercel bot commented Aug 13, 2025

The latest updates on your projects. Learn more about Vercel for Git ↗︎

| Project | Deployment | Preview | Comments | Updated (UTC) |
| ------- | ---------- | ------- | -------- | ------------- |
| analog  | Ready      | Preview | Comment  | Aug 14, 2025 11:03am |

@JeanMeijer JeanMeijer changed the title feat(api): use unkey for rate limiting feat(api): switch to unkey for rate limiting Aug 14, 2025
@JeanMeijer
Collaborator Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Aug 20, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (4)
packages/api/src/trpc.ts (2)

80-84: Avoid constructing a new Ratelimit instance on every request

Instantiating Ratelimit per call adds overhead. Cache per unique (namespace, limit, duration) triple.

Apply this diff to use a cached instance:

-  const limiter = new Ratelimit({
-    namespace: meta.ratelimit.namespace,
-    limit: meta.ratelimit.limit,
-    duration: meta.ratelimit.duration,
-    rootKey: env.UNKEY_ROOT_KEY,
-  });
+  const limiter = getLimiterFor(meta, env.UNKEY_ROOT_KEY);

Add this helper near the top-level (outside the middleware):

// cache: one limiter per (namespace,limit,duration)
const limiterCache = new Map<string, Ratelimit>();
function getLimiterFor(meta: Meta, rootKey: string) {
  const key = `${meta.ratelimit.namespace}:${meta.ratelimit.limit}:${meta.ratelimit.duration}`;
  let limiter = limiterCache.get(key);
  if (!limiter) {
    limiter = new Ratelimit({
      namespace: meta.ratelimit.namespace,
      limit: meta.ratelimit.limit,
      duration: meta.ratelimit.duration,
      rootKey,
    });
    limiterCache.set(key, limiter);
  }
  return limiter;
}

86-88: Fail-open path on limiter outages to preserve availability

A transient Unkey outage would currently 500 the request if limit() throws. Consider failing open and letting Sentry capture the error.

-  const result = await limiter.limit(identifier);
+  let result;
+  try {
+    result = await limiter.limit(identifier);
+  } catch (err) {
+    // Preserve availability on limiter errors; telemetry will be captured by Sentry middleware.
+    return next();
+  }
packages/api/src/routers/places.ts (2)

16-20: Resolver no longer needs ctx: fine; consider upstream failure mapping and provider reuse.

  • Dropping ctx from the handler is fine since rate limiting moved to middleware. Ensure the middleware still has access to the same ctx to derive identifiers.
  • Optional hardening:
    • Map upstream provider failures to a stable tRPC error (e.g., BAD_GATEWAY) and log once. This prevents leaking upstream error shapes and keeps clients consistent.
    • Reuse a module-scoped GooglePlacesProvider to keep the HTTP agent/DNS warm and slightly reduce per-request overhead.

Module-scoped provider (outside this range, near imports):

// At module scope:
const placesProvider = new GooglePlacesProvider();

Then remove the per-request instantiation here:

-      const placesProvider = new GooglePlacesProvider();
+      // use module-scoped `placesProvider` (declared above) to reuse connections

If you decide to add upstream error mapping, you can minimally wrap the call:

-    .query(async ({ input }) => {
+    .query(async ({ input }) => {
       // ...
-      return await placesProvider.autocomplete(input.input, {
-        languageCode: input.languageCode,
-        limit: input.limit,
-      });
+      try {
+        return await placesProvider.autocomplete(input.input, {
+          languageCode: input.languageCode,
+          limit: input.limit,
+        });
+      } catch (err) {
+        // import { TRPCError } from "@trpc/server" at the top if you adopt this
+        throw new TRPCError({ code: "BAD_GATEWAY", message: "Places upstream failed" });
+      }
     }),

Consider verifying whether your provider already implements timeouts/abort signals to avoid hanging calls that waste quota.


7-14: Meta-driven rate limiting verification complete

  • Duration parsing
    • The Meta.ratelimit.duration field is typed as a string literal (${number}${Unit} or ${number} ${Unit}) and is passed directly into Unkey’s Ratelimit constructor. Unkey’s implementation natively understands humanized strings like "1m" and will convert to milliseconds at runtime.
  • Identifier scope
    • In createTRPCContext we set ctx.rateLimit.id = session.user?.id ?? getIp(headers), and in the middleware we build the key as ${procedureName}:${ctx.rateLimit.id}. That ensures authenticated users and unauthenticated clients are bucketed separately with a clear precedence (userId → IP).
  • Namespace granularity
    • The "google-places" namespace is only used by this single places.autocomplete endpoint. If your intention is to share a global quota across all Google Places calls, it’s fine as-is. For per-endpoint isolation, consider a more specific namespace (e.g. "places:autocomplete") and a lower limit to tighten the budget:
 .meta({
   procedureName: "places.autocomplete",
   ratelimit: {
-    namespace: "google-places",
-    limit: 20,
+    namespace: "places:autocomplete",
+    limit: 10,
     duration: "1m",
   },
 })

No further changes are required unless you want to adopt the optional tighter scope above.
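The identifier precedence noted above (user id first, IP fallback) can be illustrated with a small helper. `getIp` here is a simplified stand-in for the project's actual header parsing, not the real implementation:

```typescript
// Hypothetical stand-in: derive a client IP from the first entry of
// x-forwarded-for, falling back to "unknown".
function getIp(headers: Map<string, string>): string {
  const fwd = headers.get("x-forwarded-for");
  return fwd ? fwd.split(",")[0].trim() : "unknown";
}

// Build the limiter key: authenticated users are bucketed by user id,
// anonymous clients by IP, each scoped to the procedure name.
function rateLimitIdentifier(
  procedureName: string,
  userId: string | undefined,
  headers: Map<string, string>,
): string {
  return `${procedureName}:${userId ?? getIp(headers)}`;
}
```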

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 207d274 and d972a29.

⛔ Files ignored due to path filters (1)
  • bun.lock is excluded by !**/*.lock
📒 Files selected for processing (6)
  • packages/api/package.json (1 hunks)
  • packages/api/src/routers/early-access.ts (1 hunks)
  • packages/api/src/routers/places.ts (1 hunks)
  • packages/api/src/trpc.ts (2 hunks)
  • packages/env/src/server.ts (1 hunks)
  • turbo.json (2 hunks)
🔇 Additional comments (5)
packages/api/package.json (2)

37-37: Switched to @unkey/ratelimit — dependency change looks correct

The move to @unkey/ratelimit@^2.1.2 aligns with the middleware changes in src/trpc.ts. No concerns on this line.


37-37: No stale @upstash/ratelimit references and lockfiles are clean

All checks pass:

  • No imports of @upstash/ratelimit found in the repo.
  • No occurrences of @upstash/ratelimit in any lockfile.
  • @unkey/ratelimit is pinned to the latest published v2.1.2.

No further action required.

packages/api/src/trpc.ts (2)

5-5: Import migration to @unkey/ratelimit and env wiring look good

The new import and server env usage are consistent with the PR goals.

Also applies to: 10-10


86-88: Clarify rate-limit bucket scope

Currently, the identifier passed to the limiter is:

// packages/api/src/trpc.ts:79–84
const identifier = `${meta.procedureName}:${ctx.rateLimit.id}`;
const result = await limiter.limit(identifier);

This yields a separate bucket for each procedureName + user/IP combination. If instead you intend to share one bucket per namespace (regardless of procedure), you should remove the procedure name from the key:

- const identifier = `${meta.procedureName}:${ctx.rateLimit.id}`;
+ const identifier = `${ctx.rateLimit.id}`;

• Existing procedures with .meta({ procedureName: "…", ratelimit: { namespace, … } }) (e.g. places.autocomplete, earlyAccess.joinWaitlist) will each get isolated buckets.
• If you prefer a shared quota across all procedures in a given namespace, switch to the simplified identifier above.

Note on metadata/tags
The SDK’s limiter.limit(identifier, opts) currently only supports cost and async in opts—there’s no built-in way to attach custom metadata or tags for analytics in the rate-limit call itself. If you need to record procedure or user info for reporting, you’ll need to emit those details separately in your application logic.

Please confirm which behavior you’d like (per-procedure vs. shared) so we can align the implementation accordingly.

packages/api/src/routers/early-access.ts (1)

26-33: Meta-based rate limiting is correctly applied

Namespace, limit, and duration look sane for a waitlist endpoint. This fits the new middleware.


.mutation(async ({ input }) => {

💡 Verification agent

🧩 Analysis chain

Prevent duplicate waitlist entries and normalize emails

There’s a race between the existence check and the insert, which can yield duplicates under concurrency. Also, email comparisons are typically case-insensitive; normalize before comparing/inserting.

  • Add a unique constraint on waitlist.email at the database level.
  • Use onConflictDoNothing (or equivalent) and normalize email:
// normalize
const email = input.email.trim().toLowerCase();

// atomic insert-or-ignore
const inserted = await db
  .insert(waitlist)
  .values({ email })
  .onConflictDoNothing({ target: waitlist.email });

// report outcome
if (inserted.rowCount === 0) {
  return { message: "You're already on the waitlist!" };
}
return { message: "You've been added to the waitlist!" };

Consider returning the same generic message regardless of prior presence to avoid leaking list membership.


🏁 Script executed:

#!/bin/bash
# Check if a unique constraint or index already exists for waitlist.email
rg -nP --type=ts -g '!**/node_modules/**' '(unique|uniqueIndex)\(.*waitlist.*email' db schema 2>/dev/null || echo "No unique constraint found; consider adding one in the Drizzle schema."

Length of output: 246


Add unique constraint on waitlist.email and use atomic insert with normalized emails

The current implementation still risks duplicate waitlist entries under concurrency and treats emails case-sensitively. We’ve verified there’s no unique constraint on waitlist.email in your Drizzle schema; please add one and switch to an atomic “insert-or-ignore” flow using a normalized email.

• db/schema.ts (Drizzle schema)
– Add .unique() (or a unique index) to the email column on the waitlist table.
• packages/api/src/routers/early-access.ts (mutation resolver)
– Normalize the incoming email via input.email.trim().toLowerCase().
– Replace the separate existence check + insert with:
 .mutation(async ({ input }) => {
-  // existing exists-then-insert logic
+  const email = input.email.trim().toLowerCase();
+
+  const inserted = await db
+    .insert(waitlist)
+    .values({ email })
+    .onConflictDoNothing({ target: waitlist.email });
+
+  if (inserted.rowCount === 0) {
+    return { message: "You're already on the waitlist!" };
+  }
+  return { message: "You've been added to the waitlist!" };
 })
• (Optionally) To prevent leaking who’s on the list, unify both outcomes under a generic message like "Your request has been received." [optional]

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-.mutation(async ({ input }) => {
+.mutation(async ({ input }) => {
+  const email = input.email.trim().toLowerCase();
+
+  const inserted = await db
+    .insert(waitlist)
+    .values({ email })
+    .onConflictDoNothing({ target: waitlist.email });
+
+  if (inserted.rowCount === 0) {
+    return { message: "You're already on the waitlist!" };
+  }
+  return { message: "You've been added to the waitlist!" };
+})
🤖 Prompt for AI Agents
In packages/api/src/routers/early-access.ts around line 39 and db/schema.ts, the
waitlist flow is vulnerable to race conditions and treats emails
case-sensitively; modify db/schema.ts to add a unique constraint on
waitlist.email (either .unique() on the column or create a unique index in the
Drizzle schema) and update the mutation in
packages/api/src/routers/early-access.ts to normalize the input with
input.email.trim().toLowerCase(), then replace the separate existence-check +
insert with an atomic insert-or-ignore/upsert (use Drizzle's insert with
onConflict/doNothing or a parameterized INSERT ... ON CONFLICT DO NOTHING) so
duplicates are prevented under concurrency; optionally return a non-revealing
generic message like "Your request has been received." for both outcomes.

VERCEL_URL: z.string().optional(),
UPSTASH_REDIS_REST_URL: z.string().url(),
UPSTASH_REDIS_REST_TOKEN: z.string().min(1),
UNKEY_ROOT_KEY: z.string().min(1),

💡 Verification agent

🧩 Analysis chain

UNKEY_ROOT_KEY is required — confirm usage stays server-only and available in non-prod

  • It’s correctly typed as server-only. Keep imports to @repo/env/server (not the client bundle) to avoid accidental exposure.
  • With skipValidation outside production, dev builds won’t fail if the var is missing, but your middleware will. If you expect local dev without Unkey keys, consider a fail-open path or making this conditional in dev.

If local dev without Unkey is common, consider a no-op limiter when UNKEY_ROOT_KEY is absent in non-production to preserve developer flow.
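If that fail-open path is adopted, one possible shape is sketched below. This is an assumption-laden illustration: `Limiter` is a minimal interface standing in for @unkey/ratelimit's client, and `makeLimiter` is a hypothetical factory, not code from this PR:

```typescript
// Minimal limiter interface (stand-in for the @unkey/ratelimit client).
interface Limiter {
  limit(identifier: string): Promise<{ success: boolean }>;
}

// In non-production without a root key, always allow the request.
const noopLimiter: Limiter = {
  async limit() {
    return { success: true };
  },
};

// Hypothetical factory: no-op in dev when the key is missing,
// hard failure in production, real limiter otherwise.
function makeLimiter(
  rootKey: string | undefined,
  nodeEnv: string,
  real: (rootKey: string) => Limiter,
): Limiter {
  if (!rootKey && nodeEnv !== "production") return noopLimiter;
  if (!rootKey) throw new Error("UNKEY_ROOT_KEY is required in production");
  return real(rootKey);
}
```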


🏁 Script executed:

#!/bin/bash
# Ensure all env imports use the server entry to prevent leaking secrets
rg -nP '@repo/env(?!/server)' -g '!**/node_modules/**' || echo "OK: only server imports detected."

Length of output: 827


Fix server-only imports for UNKEY_ROOT_KEY

The environment variable UNKEY_ROOT_KEY must remain strictly server-side to avoid leaking secrets. I’ve identified client imports that need to be updated:

  • Replace import { env } from "@repo/env/client" with import { env } from "@repo/env/server" in:
    • apps/web/src/components/settings-dialog/tabs/accounts.tsx (line 7)
    • apps/web/src/lib/trpc/client.tsx (line 17)

Optional: If local development without Unkey credentials is expected, consider implementing a no-op or “fail-open” path when UNKEY_ROOT_KEY is missing in non-production builds to preserve developer flow.

🤖 Prompt for AI Agents
In packages/env/src/server.ts around line 20, the UNKEY_ROOT_KEY env var is
defined as a server-only secret; ensure it never gets bundled into client code
by replacing any client imports of the env module with the server version:
change import { env } from "@repo/env/client" to import { env } from
"@repo/env/server" in apps/web/src/components/settings-dialog/tabs/accounts.tsx
(line ~7) and apps/web/src/lib/trpc/client.tsx (line ~17). Verify these files
only use server-side code where executed on the server (or move secret-dependent
logic to API routes). Optionally, for local dev, add a guarded no-op or
fail-open branch that only runs in non-production when UNKEY_ROOT_KEY is missing
so devs aren’t blocked, but do not expose the real secret in client bundles.

"NEXT_PUBLIC_VERCEL_ENV",
"UPSTASH_REDIS_REST_URL",
"UPSTASH_REDIS_REST_TOKEN",
"UNKEY_ROOT_KEY",

💡 Verification agent

🧩 Analysis chain

UNKEY_ROOT_KEY added to build/dev env — make sure CI and preview envs provide it

Good to see UNKEY_ROOT_KEY wired into Turbo so local/dev builds won’t silently miss it. Ensure:

  • All GitHub Actions/CI pipelines that run build/dev export UNKEY_ROOT_KEY.
  • Any preview/staging deploy pipelines set it as well.

Consider listing UNKEY_ROOT_KEY in .env.example/docs to ease onboarding.

Also applies to: 52-52


🏁 Script executed:

#!/bin/bash
# Verify references to UNKEY_ROOT_KEY across workflows and docs
fd -H -a '.github/workflows' 2>/dev/null | xargs -r rg -n "UNKEY_ROOT_KEY" || echo "No references in workflows; ensure it's added where needed."

# Double-check no client code is reading UNKEY_ROOT_KEY directly
rg -nP --type=ts --type=tsx -g '!**/node_modules/**' 'process\.env\.UNKEY_ROOT_KEY' || echo "OK: no direct client reads found."

Length of output: 251


🏁 Script executed:

#!/bin/bash
# Locate any .env, .env.example, or related environment files
fd -t f -g '.env*' 2>/dev/null

# Search for UNKEY_ROOT_KEY in environment files and Markdown docs
rg -n 'UNKEY_ROOT_KEY' -g '*.env*' -g '*.md' || echo "UNKEY_ROOT_KEY not found in env/example or docs"

Length of output: 170


🏁 Script executed:

#!/bin/bash
echo "Listing all GitHub Actions workflow files:"
fd -t f -g ".github/workflows/*.yml" || echo "No workflow files found"

echo
echo "Checking for .env.example or .env.sample files:"
fd -t f -g ".env.example" -g ".env.sample" || echo "No example env files found"

Length of output: 714


🏁 Script executed:

#!/bin/bash
set -e

echo "==> Finding GitHub Actions workflow files"
if [ -d .github/workflows ]; then
  find .github/workflows -type f \( -name '*.yml' -o -name '*.yaml' \) -print
else
  echo "⚠️  No .github/workflows directory found"
fi

echo
echo "==> Searching workflows for UNKEY_ROOT_KEY"
if [ -d .github/workflows ]; then
  grep -RHn 'UNKEY_ROOT_KEY' .github/workflows || echo "⚠️  UNKEY_ROOT_KEY not found in any workflow"
fi

echo
echo "==> Checking for .env.example and .env.sample"
for f in .env.example .env.sample; do
  if [ -f "$f" ]; then
    echo "Found $f — checking contents"
    grep -Hn 'UNKEY_ROOT_KEY' "$f" || echo "⚠️  UNKEY_ROOT_KEY missing in $f"
  else
    echo "ℹ️  $f not present"
  fi
done

Length of output: 1107


Ensure UNKEY_ROOT_KEY is configured in CI workflows and example env files

Verification shows that although you’ve wired UNKEY_ROOT_KEY into turbo.json, it’s currently:

  • Absent from all GitHub Actions workflows that invoke build/dev (e.g. .github/workflows/check-format.yaml, .github/workflows/check-types.yaml, .github/workflows/lint.yaml)
  • Missing in .env.example for onboarding

Please add:

  • In each workflow that calls turbo run build or turbo run dev, under the top-level env: section:
    env:
      UNKEY_ROOT_KEY: ${{ secrets.UNKEY_ROOT_KEY }}
  • A placeholder entry in .env.example:
    # Unkey service root key
    UNKEY_ROOT_KEY=your_unkey_root_key_here

Also confirm that any staging/preview deployment pipelines export the same environment variable.

🤖 Prompt for AI Agents
In turbo.json around line 19 the UNKEY_ROOT_KEY token is referenced but CI and
examples lack configuration; add UNKEY_ROOT_KEY to every GitHub Actions workflow
that runs `turbo run build` or `turbo run dev` by inserting under the workflow's
top-level `env:` section `UNKEY_ROOT_KEY: ${{ secrets.UNKEY_ROOT_KEY }}` (apply
to .github/workflows/check-format.yaml, check-types.yaml, lint.yaml and any
other workflows invoking turbo), add a placeholder entry to .env.example with a
comment like `# Unkey service root key` and
`UNKEY_ROOT_KEY=your_unkey_root_key_here`, and ensure any staging/preview
deployment pipelines also export the same environment variable from their
secrets.

@JeanMeijer JeanMeijer marked this pull request as draft August 22, 2025 00:50