Repository limitations and recommendations
There are some limitations to be aware of when dealing with a large amount of data in your repo. Given the time it takes to stream the data, having an upload or push fail at the end of the process, or encountering a degraded experience, be it on hf.co or when working locally, can be very annoying.
Recommendations
We gathered a list of tips and recommendations for structuring your repo. If you are looking for more practical tips, check out this guide on how to upload a large amount of data using the Python library.
| Characteristic     | Recommended | Tips                                                    |
| ------------------ | ----------- | ------------------------------------------------------- |
| Repo size          | -           | contact us for large repos (TBs of data)                 |
| Files per repo     | <100k       | merge data into fewer files                              |
| Entries per folder | <10k        | use subdirectories in repo                               |
| File size          | <5GB        | split data into chunked files                            |
| Commit size        | <100 files* | upload files in multiple commits                         |
| Commits per repo   | -           | upload multiple files per commit and/or squash history   |
\* Not relevant when using the `git` CLI directly.
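As a quick illustration of these recommendations, recent versions of `huggingface_hub` ship an `upload_large_folder` helper that splits an upload into multiple commits and can resume after interruptions. Here is a minimal sketch, assuming a recent library version; the repo id and local path are placeholders:

```python
# A minimal sketch using huggingface_hub's `upload_large_folder` helper
# (available in recent versions of the library). Repo id and path are placeholders.
from huggingface_hub import HfApi

api = HfApi()
api.upload_large_folder(
    repo_id="my-username/my-large-dataset",  # hypothetical repo
    repo_type="dataset",
    folder_path="path/to/local/folder",
)
```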
Please read the next section to better understand those limits and how to deal with them.
Explanations
What are we talking about when we say “large uploads”, and what are their associated limitations? Large uploads can be very diverse, from repositories with a few huge files (e.g. model weights) to repositories with thousands of small files (e.g. an image dataset).
Under the hood, the Hub uses Git to version the data, which has structural implications on what you can do in your repo.
If your repo is crossing some of the numbers mentioned in the previous section, we strongly encourage you to check out [`git-sizer`](https://github.com/github/git-sizer), which has very detailed documentation about the different factors that will impact your experience. Here is a TL;DR of factors to consider:
- Repository size: The total size of the data you’re planning to upload. There is no hard limit on a Hub repository size. However, if you plan to upload hundreds of GBs or even TBs of data, we would appreciate it if you could let us know in advance so we can better help you if you have any questions during the process. You can contact us at [email protected] or on our Discord.
- Number of files:
  - For optimal experience, we recommend keeping the total number of files under 100k. Try merging the data into fewer files if you have more. For example, JSON files can be merged into a single JSONL file, or large datasets can be exported as Parquet files (see the JSONL-merging sketch after this list).
  - A single folder cannot contain more than 10k files. A simple solution is to create a repository structure that uses subdirectories. For example, a repo with 1k folders from `000/` to `999/`, each containing at most 1000 files, is already enough (see the folder-sharding sketch after this list).
- File size: In the case of uploading large files (e.g. model weights), we strongly recommend splitting them into chunks of around 5GB each (see the file-chunking sketch after this list). There are a few reasons for this:
  - Uploading and downloading smaller files is much easier both for you and for other users. Connection issues can always happen when streaming data, and smaller files avoid having to resume from the beginning in case of errors.
  - Files are served to users through CloudFront. From our experience, huge files are not cached by this service, leading to slower download speeds. In any case, no single LFS file can exceed 50GB: 50GB is the hard limit for single file size.
- Number of commits: There is no hard limit for the total number of commits in your repo history. However, from our experience, the user experience on the Hub starts to degrade after a few thousand commits. We are constantly working to improve the service, but one must always remember that a git repository is not meant to work as a database with a lot of writes. If your repo's history gets very large, it is always possible to squash all the commits to get a fresh start using `huggingface_hub`'s `super_squash_history` (see the squash sketch after this list). Be aware that this is a non-revertible operation.
- Number of operations per commit: Once again, there is no hard limit here. When a commit is uploaded to the Hub, each git operation (addition or deletion) is checked by the server. When a hundred LFS files are committed at once, each file is checked individually to ensure it has been correctly uploaded. When pushing data through HTTP, a timeout of 60s is set on the request, meaning that if the process takes more time, an error is raised. However, it can happen (in rare cases) that even if the timeout is raised client-side, the process is still completed server-side. This can be checked manually by browsing the repo on the Hub. To prevent this timeout, we recommend adding around 50-100 files per commit (see the batched-commit sketch after this list).