This is a proxy server designed specifically for the Google Gemini API. It allows you to securely consolidate multiple API keys into a single endpoint and randomly select one for use with each request. This is useful for managing keys, load balancing, and integrating with front-end applications.
- Multi-Key Management: Pass multiple Google AI API keys, separated by commas, in the `x-goog-api-key` header.
- Random Key Selection: A key is randomly selected from your provided list for each request, helping to distribute the load.
- Request Forwarding: Seamlessly forwards all requests to the Google Generative Language API (`https://generativelanguage.googleapis.com`).
- Flexible Deployment: Optimized for Vercel, but also supports deployment using Docker.
We highly recommend using Vercel for a quick and easy one-click deployment.
- Click the "Deploy to Vercel" button above.
- Follow the instructions on Vercel to clone this repository and deploy it.
- Once deployed, you will receive a dedicated proxy URL.
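With your proxy URL in hand, a request looks like a standard Gemini REST API call with the host swapped for your deployment. The URL and keys below are placeholders you would replace with your own:

```shell
# Hypothetical proxy URL and placeholder keys; the path follows the
# standard Gemini generateContent REST endpoint. Multiple keys go in
# one comma-separated x-goog-api-key header.
curl "https://your-proxy.vercel.app/v1beta/models/gemini-2.0-flash:generateContent" \
  -H "Content-Type: application/json" \
  -H "x-goog-api-key: KEY_ONE,KEY_TWO" \
  -d '{"contents":[{"parts":[{"text":"Hello"}]}]}'
```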
You can also use Docker to deploy on any supported platform, such as Claw Cloud.
```shell
docker run -d \
  -p 80:80 \
  --name gemini-proxy \
  --restart unless-stopped \
  ghcr.io/spectre-pro/gemini-proxy
```
- Your proxy server will be running at `http://localhost:80`.
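To sanity-check the deployment, standard Docker commands can confirm the container is up (these assume the `gemini-proxy` container name used above):

```shell
# Confirm the container is running and inspect its logs for errors.
docker ps --filter name=gemini-proxy
docker logs gemini-proxy
```

Once the container reports as running, requests sent to `http://localhost:80` are forwarded to the Google Generative Language API as described above.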