Conversation

@jean-lucas jean-lucas commented Sep 23, 2022

When trying to run local uploads with a large number of items (i.e. 1k+), the following error appeared:
`Cannot connect to host api.scale.com:443 ssl:default [nodename nor servname provided, or not known]`

This (to my understanding) was caused by flooding the network with too many simultaneous requests.

It seems like the Semaphore that was previously implemented to limit concurrency was not actually gating the POST requests.

[sc-594516]
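
For context, here is a minimal sketch of the failure mode described above, assuming the uploads are plain aiohttp POSTs; `upload_one`, `upload_all`, and `URL` are hypothetical stand-ins, not the client's actual code. Fanning out one request per item with an unbounded `asyncio.gather` starts every POST at once, which is what floods the network.

```python
# Illustrative sketch only (hypothetical names, not the nucleus client code).
import asyncio
import aiohttp

URL = "https://api.scale.com"  # placeholder for the real upload endpoint


async def upload_one(session: aiohttp.ClientSession, payload: dict) -> int:
    # One POST per item; nothing here limits how many run at the same time.
    async with session.post(URL, json=payload) as resp:
        return resp.status


async def upload_all(payloads: list) -> list:
    async with aiohttp.ClientSession() as session:
        # With 1k+ payloads this schedules every request simultaneously and can
        # exhaust sockets/DNS lookups, surfacing as "Cannot connect to host ...".
        return await asyncio.gather(*(upload_one(session, p) for p in payloads))
```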

Comment on lines -31 to -39
```python
async def gather_with_concurrency(n, *tasks):
    """Helper method to limit the concurrency when gathering the results from multiple tasks."""
    semaphore = asyncio.Semaphore(n)

    async def sem_task(task):
        async with semaphore:
            return await task

    return await asyncio.gather(*(sem_task(task) for task in tasks))
```
jean-lucas (Contributor Author) commented:
The POST requests were being fired before the Semaphore was acquired.
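
One common way this can happen, sketched below under that assumption (the actual call site may have differed): if the awaitables handed to `gather_with_concurrency` are already-scheduled `asyncio.Task` objects rather than bare coroutines, every task starts running immediately, and the semaphore only delays when each result is awaited, not when the POST fires. Names below are illustrative.

```python
# Demonstrates the pitfall with the old helper (illustrative, not client code).
import asyncio


async def gather_with_concurrency(n, *tasks):
    semaphore = asyncio.Semaphore(n)

    async def sem_task(task):
        async with semaphore:
            return await task

    return await asyncio.gather(*(sem_task(task) for task in tasks))


async def fake_post(i: int) -> int:
    print(f"request {i} started")  # with pre-created Tasks, all of these fire at once
    await asyncio.sleep(0.1)       # stand-in for the POST round trip
    return i


async def main() -> None:
    # create_task() schedules the coroutines right away, so the semaphore
    # inside gather_with_concurrency no longer gates when the requests start.
    tasks = [asyncio.create_task(fake_post(i)) for i in range(10)]
    await gather_with_concurrency(2, *tasks)


asyncio.run(main())
```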

```python
if response.status == 503:
    raise TimeoutError(
        "The request to upload your max is timing out, please lower local_files_per_upload_request in your api call."
    )

async with UPLOAD_SEMAPHORE:  # line added in this PR
```
jean-lucas (Contributor Author) commented:

The only change here is the addition of the semaphore context manager.
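
A minimal sketch of the pattern this hunk introduces, assuming aiohttp; apart from `UPLOAD_SEMAPHORE` and the 503 message (both visible in the diff above), the names and the concurrency limit are hypothetical. Because the semaphore is acquired inside the coroutine that performs the POST, no request can be fired until a slot is free.

```python
# Sketch of the fixed pattern (assumes aiohttp; limit and helper names are hypothetical).
import asyncio
import aiohttp

MAX_CONCURRENT_UPLOADS = 10  # assumed value, for illustration only
# Shared by every upload coroutine; on Python 3.10+ the semaphore binds
# lazily to the running event loop, so module-level creation is fine.
UPLOAD_SEMAPHORE = asyncio.Semaphore(MAX_CONCURRENT_UPLOADS)


async def post_upload(session: aiohttp.ClientSession, url: str, payload: dict) -> dict:
    # Acquire the semaphore *before* the request is created, so the POST
    # itself (not just the awaiting of its result) is rate-limited.
    async with UPLOAD_SEMAPHORE:
        async with session.post(url, json=payload) as response:
            if response.status == 503:
                raise TimeoutError(
                    "The request to upload your max is timing out, please lower "
                    "local_files_per_upload_request in your api call."
                )
            return await response.json()
```

With this in place, even a `gather` over thousands of `post_upload` coroutines keeps at most `MAX_CONCURRENT_UPLOADS` requests in flight at any time.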

@jean-lucas jean-lucas self-assigned this Sep 23, 2022

## [0.14.20](https://github.com/scaleapi/nucleus-python-client/releases/tag/v0.14.20) - 2022-09-23

### Fixed
- Local uploads are correctly batched, preventing the network from being flooded with requests
Contributor commented:

Any estimate for up to which number / size of data things would work robustly now? Or is there basically no limit and it would just be annoyingly slow?

jean-lucas (Contributor Author) replied:

> Any estimate for up to which number / size of data things would work robustly now?

IMO it should work robustly for any number of item uploads, since the client is now batching correctly.

@pfmark pfmark (Contributor) left a comment:

Tested it with a working internet connection now and it works well for 10k items. It would be good to fix the progress bar; otherwise, good to go from my side!

@jean-lucas jean-lucas merged commit 6108121 into master Sep 26, 2022
@jean-lucas jean-lucas deleted the jean-lucas-batch-uploads branch September 26, 2022 12:03