Fix sync ingest for large number of requests #363
Conversation
```python
async def gather_with_concurrency(n, *tasks):
    """Helper method to limit the concurrency when gathering the results from multiple tasks."""
    semaphore = asyncio.Semaphore(n)

    async def sem_task(task):
        async with semaphore:
            return await task

    return await asyncio.gather(*(sem_task(task) for task in tasks))
```
The POST requests were being fired before the semaphore was acquired.
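The effect of the fix can be exercised with a small self-contained sketch (the `fake_post` coroutine and the counters are illustrative stand-ins, not part of the client): because each awaitable is wrapped in `sem_task`, nothing executes until the semaphore slot is acquired, so at most `n` requests are in flight at once.

```python
import asyncio

async def gather_with_concurrency(n, *tasks):
    """Limit concurrency when gathering results from multiple awaitables."""
    semaphore = asyncio.Semaphore(n)

    async def sem_task(task):
        async with semaphore:
            return await task

    return await asyncio.gather(*(sem_task(task) for task in tasks))

async def main():
    running = 0  # how many fake requests are in flight right now
    peak = 0     # the highest concurrency observed

    async def fake_post(i):
        nonlocal running, peak
        running += 1
        peak = max(peak, running)
        await asyncio.sleep(0.01)  # simulate network I/O
        running -= 1
        return i

    results = await gather_with_concurrency(3, *(fake_post(i) for i in range(10)))
    return results, peak

results, peak = asyncio.run(main())
print(results, peak)  # order is preserved; peak concurrency never exceeds 3
```

With the buggy version, all ten coroutines would begin their I/O before any semaphore was acquired; here `peak` is capped at 3.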
```python
if response.status == 503:
    raise TimeoutError(
        "The request to upload your max is timing out, please lower local_files_per_upload_request in your api call."
    )
```

```python
async with UPLOAD_SEMAPHORE:
```
The only change here is the addition of the semaphore context manager.
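The pattern can be sketched in isolation. The names below are assumptions for illustration: `UPLOAD_SEMAPHORE` mirrors the client's module-level semaphore, while `fake_post`, `FakeResponse`, and `upload_batch` are hypothetical stand-ins for the real HTTP call and handler.

```python
import asyncio

# Assumed module-level cap on concurrent uploads (value is illustrative).
UPLOAD_SEMAPHORE = asyncio.Semaphore(10)

class FakeResponse:
    """Minimal stand-in for an HTTP response object."""
    def __init__(self, status):
        self.status = status

async def fake_post(payload):
    # Pretend the server rejects oversized batches with a 503.
    await asyncio.sleep(0)
    return FakeResponse(503 if payload.get("too_big") else 200)

async def upload_batch(payload):
    # Acquire the semaphore *before* firing the request, so at most
    # N uploads are ever in flight at once.
    async with UPLOAD_SEMAPHORE:
        response = await fake_post(payload)
        if response.status == 503:
            raise TimeoutError(
                "The upload request is timing out; lower "
                "local_files_per_upload_request in your API call."
            )
        return response.status

print(asyncio.run(upload_batch({"items": []})))  # 200
```

The key design point is that the semaphore wraps the whole request, including the response handling, rather than being checked after the POST has already gone out.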
This pull request has been linked to Shortcut Story #594516: Actionable error message for Sync upload fails for 10k images.
```markdown
## [0.14.20](https://github.com/scaleapi/nucleus-python-client/releases/tag/v0.14.20) - 2022-09-23

### Fixed
- Local uploads are now correctly batched, preventing the network from being flooded with requests
```
Any estimate for up to which number / size of data things would work robustly now? Or is there basically no limit and it would just be annoyingly slow?
> Any estimate for up to which number / size of data things would work robustly now?

IMO it should work robustly for any number of item uploads, since the client now batches correctly.
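As a rough illustration of why batching removes the upper limit (the helper name and batch size are hypothetical, not the client's actual values): splitting the item list into fixed-size chunks keeps the request count proportional to the batch count rather than the item count.

```python
def chunked(items, batch_size):
    """Yield successive batches of at most batch_size items."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# 10k items at 1000 per request -> 10 requests instead of 10,000.
batches = list(chunked(list(range(10_000)), 1000))
print(len(batches))  # 10
```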
pfmark left a comment
Tested it with a working internet connection now and it works well for 10k items. Would be good to fix the progress bar; otherwise good to go from my side!
When trying to run local uploads with a large number of items (i.e. 1k+), the following error appeared:

```
Cannot connect to host api.scale.com:443 ssl:default [nodename nor servname provided, or not known]
```

This (to my understanding) was caused by flooding the network with too many simultaneous requests. It seems the Semaphore that was previously implemented to control the concurrency was not controlling the POST requests properly.

[sc-594516]