upload: don't backoff as much for 500 errors
When we encounter a server error (non-200 response) we always back off.
However, often this is not a rate limit but some other (possibly
transient) 500 error which we can retry with less waiting.
Reduce the backoff scaling for these types of errors.

Signed-off-by: Bob Copeland <[email protected]>
bcopeland committed Mar 13, 2018
1 parent e02dfff commit 40109c8
Showing 1 changed file with 9 additions and 1 deletion.
10 changes: 9 additions & 1 deletion upload-queue.c
@@ -252,6 +252,7 @@ static void upload_queue_upload_all(const struct session *session, unsigned cons
 	long http_code;
 	bool http_failed_all;
 	int backoff;
+	int backoff_scale = 8;
 
 	while ((entry = upload_queue_next_entry(key, &name, &lock))) {
 
@@ -289,7 +290,7 @@ static void upload_queue_upload_all(const struct session *session, unsigned cons
 		if (i) {
 			lpass_log(LOG_DEBUG, "UQ: attempt %d, sleeping %d seconds\n", i+1, backoff);
 			sleep(backoff);
-			backoff *= 8;
+			backoff *= backoff_scale;
 		}
 
 		lpass_log(LOG_DEBUG, "UQ: posting to %s\n", argv[0]);
@@ -304,6 +305,13 @@ static void upload_queue_upload_all(const struct session *session, unsigned cons
 
 		lpass_log(LOG_DEBUG, "UQ: result %d (http_code=%ld)\n", curl_ret, http_code);
 
+		if (http_code == 500) {
+			/* not a rate-limit error; try again with less backoff */
+			backoff_scale = 2;
+		} else {
+			backoff_scale = 8;
+		}
+
 		if (result && strlen(result))
 			should_fetch_new_blob_after = true;
 		free(result);