Description
After upgrading to v2.0.2 I'm no longer able to see metrics for one org. Another org works.
Environment variables for non-working org:
LOG_LEVEL = "debug"
LISTEN_PORT = "9171"
GITHUB_RATE_LIMIT = 20000
GITHUB_APP = true
GITHUB_APP_ID = 123456
GITHUB_APP_INSTALLATION_ID = 08765309
GITHUB_APP_KEY_PATH = "/secrets/key.pem"
ORGS = "MyFirstOrg"Working org environment variables:
LOG_LEVEL = "debug"
LISTEN_PORT = "9172"
GITHUB_RATE_LIMIT = 20000
GITHUB_APP = true
GITHUB_APP_ID = 123457
GITHUB_APP_INSTALLATION_ID = 18765309
GITHUB_APP_KEY_PATH = "/secrets/key.pem"
ORGS = "MySecondOrg"Logs from both orgs say:
time="2025-10-22T17:10:03Z" level=info msg="Starting Exporter"
time="2025-10-22T17:13:09Z" level=info msg="collecting metrics"
time="2025-10-22T17:18:09Z" level=info msg="collecting metrics"The working org does show the new resource labels for github_rate_limit metrics:
{__name__="github_rate_limit", instance="10.51.156.118:9172", job="foo", org="MySecondOrg", resource="actions_runner_registration"}
The working org only has ~20 repos, while the non-working one has over 200.
What could I try to get this working again?
Update / New Findings:
After further investigation, I've identified a likely cause for the regression after upgrading:
Previous versions of the exporter used a GITHUB_RATE_LIMIT environment variable: if the remaining API rate limit dropped below the configured threshold (default 15,000), the exporter proactively refreshed the GitHub App token. For organizations with many repositories or high API usage, this let the exporter generate a new token before exhausting the GitHub API quota.
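For reference, my reading of that old behaviour amounts to roughly the sketch below. The names newInstallationClient, refreshIfQuotaLow and rateLimitThreshold are illustrative, not the exporter's actual identifiers, and the ghinstallation transport is my assumption about how the App authentication is wired up:

```go
package exporter

import (
	"context"
	"log"
	"net/http"

	"github.com/bradleyfalzon/ghinstallation/v2"
	"github.com/google/go-github/v50/github"
)

// rateLimitThreshold stands in for the old GITHUB_RATE_LIMIT setting.
const rateLimitThreshold = 20000

// newInstallationClient builds a go-github client authenticated as a GitHub App
// installation; a new transport requests a fresh installation token on first use.
func newInstallationClient(appID, installationID int64, keyPath string) (*github.Client, error) {
	tr, err := ghinstallation.NewKeyFromFile(http.DefaultTransport, appID, installationID, keyPath)
	if err != nil {
		return nil, err
	}
	return github.NewClient(&http.Client{Transport: tr}), nil
}

// refreshIfQuotaLow checks the remaining core quota before a collection cycle and
// swaps in a freshly authenticated client when it drops below the threshold,
// which is what GITHUB_RATE_LIMIT used to control.
func refreshIfQuotaLow(ctx context.Context, client *github.Client, appID, installationID int64, keyPath string) (*github.Client, error) {
	limits, _, err := client.RateLimits(ctx)
	if err != nil {
		return client, err
	}
	if core := limits.GetCore(); core != nil && core.Remaining < rateLimitThreshold {
		log.Printf("remaining quota %d below threshold %d, refreshing installation token", core.Remaining, rateLimitThreshold)
		return newInstallationClient(appID, installationID, keyPath)
	}
	return client, nil
}
```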
This logic was removed in commit ce81cac6997e866824714472fa9a893140f4951e. The exporter now only refreshes tokens on expiry, not on low quota, relying on the standard go-github library behavior.
Impact:
- For large organizations or those with high-frequency scraping, the exporter can now hit the GitHub API rate limit and stop collecting metrics until the quota resets, since it no longer refreshes tokens proactively (see the sketch after this list for how that surfaces from go-github).
- This explains why after upgrading, metrics collection fails for large orgs but continues to work for smaller orgs.
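For completeness, this is roughly how the failure surfaces from go-github once the quota is exhausted; the repository call and logging below are only for illustration and not taken from the exporter's code:

```go
package exporter

import (
	"context"
	"errors"
	"log"

	"github.com/google/go-github/v50/github"
)

// collectOnce is a hypothetical single collection pass. When the installation's
// quota is exhausted, go-github returns a *github.RateLimitError, so the pass
// yields no metrics until the quota window resets.
func collectOnce(ctx context.Context, client *github.Client, org string) {
	repos, _, err := client.Repositories.ListByOrg(ctx, org, nil)
	var rateErr *github.RateLimitError
	if errors.As(err, &rateErr) {
		log.Printf("org %s: rate limited until %v, skipping collection", org, rateErr.Rate.Reset)
		return
	}
	if err != nil {
		log.Printf("org %s: error listing repositories: %v", org, err)
		return
	}
	log.Printf("org %s: collected %d repositories", org, len(repos))
}
```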
Request:
Would it be possible to restore the configuration and logic that allowed quota-based token refreshing (using GITHUB_RATE_LIMIT or similar) in the current version? This is important for supporting metrics collection in environments with high API usage, where hitting the rate limit is otherwise unavoidable.
Thank you for looking into this!