When setting up zfs_uploader against Scaleway Object Storage (specifically their GLACIER tier), everything worked as expected except for one caveat: the maximum part number on Scaleway is 1,000 rather than the 10,000 used by AWS.
This resulted in an error when uploading with the default setup: part sizes were calculated against a 10,000-part limit, so the upload eventually failed by exceeding Scaleway's limit of 1,000 parts.
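For context, here's a minimal sketch of the failure mode. This is illustrative, not zfs_uploader's actual code; the function name and snapshot size are made up for the example:

```python
# Illustrative sketch: deriving the part size from a hard-coded
# 10,000-part ceiling, as S3-style multipart uploads commonly do.
MIN_PART_SIZE = 5 * 1024 * 1024   # 5 MiB S3 minimum part size

def part_size_for(total_bytes, max_parts=10_000):
    """Smallest part size (>= the 5 MiB minimum) that fits within max_parts."""
    return max(MIN_PART_SIZE, -(-total_bytes // max_parts))  # ceiling division

snapshot_bytes = 20 * 1024**3                 # a 20 GiB snapshot stream
size = part_size_for(snapshot_bytes)          # 5 MiB (the minimum wins here)
parts = -(-snapshot_bytes // size)            # 4,096 parts
# 4,096 parts is fine against AWS's 10,000-part limit, but Scaleway
# rejects any part number above 1,000, so the upload fails partway through.
```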
I resolved this for my use case by simply modifying a number in job.py: Erisa@20ed42f. However, going forward I feel it would be a good idea to make this value configurable in the zfs_uploader configuration file and document it in the README.
You could also detect and adjust the value based on predefined provider limits, but it would still be nice to have it in a user-configurable place; see the sketch below for what that might look like.
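As a rough sketch of the proposal, assuming the INI-style config file zfs_uploader already uses (the `max_multipart_parts` option name is hypothetical, not an existing setting):

```ini
[my-backup-job]
bucket_name = my-bucket
access_key = ...
secret_key = ...
storage_class = GLACIER
# Hypothetical option: cap the number of multipart parts for providers
# like Scaleway that allow fewer than AWS's 10,000.
max_multipart_parts = 1000
```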