Usually, at the destination, with compression and deduplication you'll end up using less space than what is provisioned at the source, even if you keep many days of history.
Let me share a real-life example from a repository holding 30 days' worth of NodeWeaver backups:
This archive: 3.68 TB --> space allocated by the NodeWeaver images on the source cluster
All archives: 121.62 TB --> total size if every backup in the repository were extracted
This archive: 857.45 GB --> all the NodeWeaver images on the source cluster, once compressed
All archives: 24.58 TB --> what all the backups would take compressed (from 121.62 TB down to 24.58 TB)
This archive: 28.02 GB --> what was actually sent from the source to the backup storage
All archives: 1.73 TB --> total occupied space on the backup storage (30 days of day-by-day recovery for the whole 3.68 TB provisioned at the source)
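To put those figures in perspective, here is a small sketch of the ratios they imply. The variable names are mine, not fields from the backup tool's output, and the values are simply the ones quoted above:

```python
# Illustrative arithmetic from the repository stats above.
# Values are as reported (TB); names are my own labels, not tool output fields.
logical_all = 121.62     # TB: all backups, uncompressed
compressed_all = 24.58   # TB: all backups, compressed
stored_all = 1.73        # TB: actually occupied after compression + dedup
provisioned = 3.68       # TB: allocated at the source cluster

compression_ratio = logical_all / compressed_all   # compression alone
overall_ratio = logical_all / stored_all           # compression + dedup
vs_provisioned = stored_all / provisioned          # backup store vs source

print(f"compression alone: {compression_ratio:.1f}x")
print(f"compression + dedup: {overall_ratio:.1f}x")
print(f"backup store vs provisioned source: {vs_provisioned:.0%}")
```

So 30 days of history end up smaller than the source allocation itself: roughly a 5x gain from compression, and about 70x overall once deduplication across the daily archives is counted.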
Of course you may need a bit more space if you want to extract all of your data at once, but in the end you don't need to oversize your backup storage.
Note: in this example the VM images of course contain some free space (I can't tell you exactly how much), but this is a cluster of about 30 VMs doing active work, running on 3 all-flash nodes in production and 2 mostly-rotational nodes (with small flash drives, just in case) for backup.