This repository was archived by the owner on Mar 4, 2021. It is now read-only.

Conversation

@lantrix commented May 26, 2015

I've found that with lots of volumes to tag, and the Volume Tagging Monkey always being on Monkey time, exceeding the AWS API rate limit is common when tagging metadata on volumes.

This PR solves that issue, and also fixes Issue #173 (at least for the Volume Tagging Monkey).

lantrix added 6 commits April 30, 2015 21:28
Implement a user-configurable Janitor Monkey OWNER_TAG_KEY
When a new leashed Janitor Monkey is run, there can be a situation where the expectedTerminationTime of the resource is null (as per Issue Netflix#189). This ensures that, if it is null, the java.util.Date functions, which can't handle a null, are not called.
Use exponential backoff: exponentially increase the backoff duration on each consecutive failure, up to 5 failures (see the sketch after this commit list).
This is being done more globally, so these changes are being undone.
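For context, here is a minimal sketch (not the PR's actual diff) of the kind of per-call retry the exponential-backoff commit describes; tagWithBackoff and the Runnable stand-in for the tagging call are hypothetical names:

import com.amazonaws.AmazonServiceException;

// Minimal sketch: each consecutive failure doubles the wait,
// and we give up after 5 failures, as the commit message describes.
void tagWithBackoff(Runnable createTagsCall) throws InterruptedException {
    final int maxFailures = 5;
    int failures = 0;
    while (true) {
        try {
            createTagsCall.run();   // stand-in for the throttled EC2 tagging call
            return;
        } catch (AmazonServiceException ase) {
            failures++;
            if (failures >= maxFailures) {
                throw ase;          // retries exhausted, rethrow
            }
            Thread.sleep(1000L << failures);  // 2s, 4s, 8s, 16s
        }
    }
}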
@cloudbees-pull-request-builder

NetflixOSS » SimianArmy » SimianArmy-pull-requests #8 SUCCESS
This pull request looks good

@cloudbees-pull-request-builder

SimianArmy-pull-requests #180 SUCCESS
This pull request looks good

@ebukoski (Contributor)

Did you consider using the Amazon ClientConfiguration class instead of rolling custom retry code?

I did something similar on a fork to handle API limits with SimpleDB. The code is simpler and it is globally applied instead of having to roll custom retry code at every client interaction. In my case I just increased the retry count.

ClientConfiguration config = new ClientConfiguration();
config.setMaxErrorRetry(11);
....
client = new AmazonSimpleDBClient(config);

I experimented with custom retry policies but increasing the retry count worked just as well and was less code.
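For reference, a minimal sketch of the global approach described above, assuming the AWS SDK for Java 1.x; the Ec2ClientFactory class and the retry count of 11 are illustrative, not taken from either fork:

import com.amazonaws.ClientConfiguration;
import com.amazonaws.retry.PredefinedRetryPolicies;
import com.amazonaws.retry.RetryPolicy;
import com.amazonaws.services.ec2.AmazonEC2Client;

public class Ec2ClientFactory {

    // Simplest option: keep the SDK's built-in exponential backoff and
    // just allow more retries before giving up.
    public static AmazonEC2Client withHigherRetryCount() {
        ClientConfiguration config = new ClientConfiguration();
        config.setMaxErrorRetry(11);
        return new AmazonEC2Client(config);
    }

    // Alternative: an explicit RetryPolicy that reuses the SDK's default
    // retry condition and backoff strategy with a custom max retry count.
    public static AmazonEC2Client withCustomRetryPolicy() {
        RetryPolicy policy = new RetryPolicy(
                PredefinedRetryPolicies.DEFAULT_RETRY_CONDITION,
                PredefinedRetryPolicies.DEFAULT_BACKOFF_STRATEGY,
                11,     // maxErrorRetry
                true);  // honor maxErrorRetry in ClientConfiguration
        ClientConfiguration config = new ClientConfiguration();
        config.setRetryPolicy(policy);
        return new AmazonEC2Client(config);
    }
}

Either variant is set once when the client is constructed, so every SDK call made through that client gets the same retry behaviour without per-call retry code.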

@lantrix (Author) commented May 27, 2015

No; I did a quick and dirty hack for a limit I was seeing in production use :)

However, it's a good idea. I'll look into it and have a look at your latest commit: ebukoski@97c1a3a

@lantrix (Author) commented May 27, 2015

AWS explains that you should implement exponential back-off when you receive server (5xx) or throttling errors:
http://docs.aws.amazon.com/general/latest/gr/api-retries.html
I'll close this and re-implement globally.
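For completeness, a small sketch of the error classification the linked AWS guidance implies: retry (with backoff) only on server-side (5xx) and throttling errors, and fail fast otherwise. The specific error-code strings below are illustrative, not exhaustive:

import com.amazonaws.AmazonServiceException;

// Retry (with backoff) only on server-side errors and throttling,
// per http://docs.aws.amazon.com/general/latest/gr/api-retries.html
static boolean isRetryable(AmazonServiceException ase) {
    if (ase.getStatusCode() >= 500) {
        return true;                             // 5xx: transient server error
    }
    String code = ase.getErrorCode();
    return "RequestLimitExceeded".equals(code)   // EC2 throttling
            || "Throttling".equals(code)
            || "ThrottlingException".equals(code);
}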

@lantrix lantrix closed this May 27, 2015
@lantrix lantrix deleted the RequestLimitExceeded-exponentialBackoff branch May 27, 2015 02:36