Backblaze would be the cheapest option, but 15 TB is going to take a while unless you have an incredible upload speed.
I ordered AWS Snowcone to upload 10 TB of data to S3, then downloaded it directly from S3 at my company's other site; it cost about 200 bucks. We use Borg to back up the data and compress it at a high level before copying it to the Snowcone; the original data might be 30 TB.
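For anyone curious what that Borg step looks like, here's a rough sketch (repo path, source path, and compression level are assumptions, not the commenter's actual setup). It's written as a dry run that only echoes the commands:

```shell
# Sketch of the Borg approach above (paths are hypothetical).
# RUN=echo makes this a dry run; set RUN= (empty) to execute for real.
RUN=echo

# Initialize a repository on the disk destined for the Snowcone:
$RUN borg init --encryption=repokey /mnt/snowcone/backup-repo

# Archive with high zstd compression to shrink the original data:
$RUN borg create --compression zstd,19 --stats /mnt/snowcone/backup-repo::data-{now} /srv/data
```

Borg's `{now}` placeholder timestamps the archive name; `zstd,19` trades CPU time for a much smaller transfer.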
- Buy two 20 TB disks and put them in an old PC you have lying around.
- Create a ZFS mirror on that PC.
- Transfer the data to that PC.
- Create your ZFS array on your server and transfer the data back.
- Use the PC with the 20 TB mirror as a backup system.
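The steps above can be sketched roughly as follows (pool, dataset, host, and device names are all assumptions; written as a dry run that only echoes the commands):

```shell
# Dry-run sketch of the spare-PC mirror approach above.
# RUN=echo prints the commands; set RUN= (empty) to execute for real.
RUN=echo

# 1. Mirror the two 20 TB disks on the spare PC:
$RUN zpool create backuppool mirror /dev/sdb /dev/sdc
$RUN zfs create backuppool/staging

# 2. Pull the data over from the server:
$RUN rsync -aHAX --info=progress2 server:/tank/data/ /backuppool/staging/

# 3. After rebuilding the server pool, push it back:
$RUN rsync -aHAX --info=progress2 /backuppool/staging/ server:/tank/data/
```

`zfs send`/`zfs receive` would also work for step 2 if the source is already ZFS; rsync is shown here since the original array may not be.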
Sorry to be this guy, but do you not have a backup of your data? That would solve this and other potential future issues.
Can't you set up a new array with your three new drives, copy the data over, then expand your array with your old drives? IIRC ZFS added support for expanding pools last year or so?
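If I have it right, that's the RAIDZ expansion feature that landed in OpenZFS 2.3, which grows a raidz vdev one disk at a time via `zpool attach`. A dry-run sketch of the suggestion (pool, vdev, and device names are assumptions):

```shell
# Dry-run sketch: build a new raidz1 from the new drives, then grow it
# with the old ones. RUN=echo prints commands; set RUN= to execute.
RUN=echo

# 1. New raidz1 from the three new drives:
$RUN zpool create tank2 raidz1 /dev/sdd /dev/sde /dev/sdf

# 2. Copy the data over, destroy the old array, then attach the freed
#    drives to the raidz vdev one at a time (each attach triggers an
#    expansion that restripes onto the wider vdev):
$RUN zpool attach tank2 raidz1-0 /dev/sda
```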
If you want to use a cloud provider, then your main problem will be uploading this much data and then downloading it back again, both price and time wise.
An easier solution is in fact copying the data over to another storage. Maybe borrow some pendrives from friends/family?
Expanding an array does work but you get less usable space compared to starting a new array from scratch. The old data doesn't get restriped so you get a less efficient parity-to-data ratio, and this effect accumulates as you add more drives.
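A rough back-of-the-envelope for that parity overhead (a simplified model that ignores padding, metadata, and dynamic stripe widths):

```python
# Rough illustration of why expanded pools keep a worse parity-to-data
# ratio for data written before the expansion (simplified model).

def usable_fraction(stripe_width: int) -> float:
    """Data fraction of a raidz1 stripe: (width - 1) parts data, 1 parity."""
    return (stripe_width - 1) / stripe_width

fresh_4wide = usable_fraction(4)  # pool built as 4-disk raidz1 from scratch
old_3wide = usable_fraction(3)    # data written before expanding 3 -> 4 disks

print(f"fresh 4-wide raidz1: {fresh_4wide:.0%} data")
print(f"pre-expansion data keeps 3-wide stripes: {old_3wide:.0%} data")
```

So on a 3-to-4-disk expansion, old blocks stay at roughly 67% data versus 75% for a pool built 4-wide from the start, and each further expansion widens that gap for the already-written data.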
> If you want to use a cloud provider, then your main problem will be uploading this much data and then downloading it back again, both price and time wise.
If you happen to have a symmetric gig pipe (or larger) that definitely helps :) I currently have 27 TB in the cloud; for most of it, it was just easier to re-download it, since the cloud VM had a 10 gig pipe compared to my gig down and 30 Mbps up (cries in Comcast... I moved and now I have access to AT&T fiber at up to 5 Gbps, though it's like $250/month). I think I'm gonna move everything back locally; doing everything in the cloud was a temporary solution for when I lived with my parents for a few months. They weren't too happy about the electricity bill of my 15 HDD, 8 NVMe drive, 2x10G NIC, 128 GB ECC, 1 kW PSU build running a Threadripper 2970WX, liquid cooled of course... and then having the AC running 24/7 to keep it cool. I had all of that crammed into a 4U server, which was pared down from the larger one I was using at my other apartment.
With cloud storage I find I'm able to download large files from the web UI, but I could also attach the bucket and just pull the data from there.
Off topic, but what data do you store? Photos? I was always curious about people who use a lot of storage.
Burn 3,192 DVDs
mdadm can be set up with just one drive (the second member marked missing). Then just add the second drive after the data transfer.
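For the curious, that degraded-start trick looks roughly like this (device names are assumptions; `missing` is mdadm's literal keyword for an absent member). Written as a dry run that only echoes the commands:

```shell
# Dry-run sketch of the one-drive mdadm trick above.
# RUN=echo prints commands; set RUN= (empty) to execute for real.
RUN=echo

# 1. Build a RAID1 with one real disk and one missing slot:
$RUN mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb missing

# 2. Copy the data onto /dev/md0, then add the second disk and let it resync:
$RUN mdadm --add /dev/md0 /dev/sdc
```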
I have like 7 TB of data on my current RAID array, and in the future I plan to wipe it and make it ZFS, with three additional 7 TB drives. I'd like to not lose all the data. I'm sure I can't be the only one who has this issue. What do you guys use as a temporary backup solution while repurposing your hardware?
Realize I have no way to back up tens of TBs, cry, destroy my ZFS pool, and start over. 90% of my stuff is media that I can easily reacquire from Usenet in about 1.5-2 weeks of 24/7 downloading. So it's not really a hassle, it just takes forever.
If you have good upload bandwidth (like if you're on fiber with a 1 gig upload), IDriveE2.com has pretty reasonable pricing. It's object storage (think S3), not block storage (a normal filesystem), but if you're just using it for a backup that shouldn't matter. I apparently got in right before they doubled their prices and got 50 TB for a year for $500. They have "on demand" pricing which is more expensive than their yearly plans, but it's currently $4/TB/month, so that would only cost you like $30/month... assuming you can upload all your data to their servers.
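Since e2 is S3-compatible, rclone is one common way to push a backup there. A dry-run sketch (the remote name `idrive` and the bucket name are assumptions; the remote would be configured in rclone with `type = s3` and the provider's endpoint):

```shell
# Dry-run sketch of uploading a backup to S3-compatible object storage.
# RUN=echo prints the command; set RUN= (empty) to execute for real.
RUN=echo

$RUN rclone copy /srv/backups idrive:my-backup-bucket --transfers 8 --progress
```

Raising `--transfers` helps saturate a fat upload pipe when the backup is many separate files.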
rsync.net
I don’t know what your 7TB is, but I know what my TBs of data are and my approach in this situation was to just nuke anything that was available from its original source, backup the few things that were orphaned to an external drive and pull everything down again onto the new array.