
Forum Index : Microcontroller and PC projects : ext4 tools @ 10TB...

Grogster

Admin Group

Joined: 31/12/2012
Location: New Zealand
Posts: 9308
Posted: 07:39am 18 Mar 2023

Hello all.

I have just decided to consolidate all my smaller NAS boxes into one with a single 10TB WD Red drive.

All this is going fine, but I am worried about the SIZE of a 10TB drive.
ext4 seems to be great, but I am interested in any FS tools you can download and run on ext4 Linux drives. I am running Puppy Linux for the server, as it is so sleek and simple and just goes and goes and goes...

I've never had a failure of my smaller WD Red drives, but 10TB is rather a lot to lose in one go, should the drive die.  I can send it back if it dies under warranty, but all the data - 10TB of it - would be lost in the process.

So, I want Linux-based ext4 FS checking tools that anyone can recommend.
I know all about ZFS, but I am not using that in this case, so....

I HAVE BACKUPS of everything being stored on the WD 10TB drive, but drives of this size do make me anxious about how recoverable they would be after some kind of FS-related issue.
As I have backups, I CAN copy everything back again, but 10TB takes a bit of time....

Again - IGNORE drive failure, I am only asking about FS corruption recovery options etc.

Thanks for any replies.  
Smoke makes things work. When the smoke gets out, it stops!
 
CaptainBoing

Guru

Joined: 07/09/2016
Location: United Kingdom
Posts: 2080
Posted: 08:50am 18 Mar 2023

That single drive... is it a single mechanism? What backup do you have of it? What impact would it have if it failed? That's a lot of data in one lump.

coming from a loving place, not trying to be funny or anything.
 
Mixtel90

Guru

Joined: 05/10/2019
Location: United Kingdom
Posts: 6814
Posted: 09:19am 18 Mar 2023

Personally I'm not overly happy with single big drives. I'd want a pair in RAID 1 at least. Mind you, as long as everything is backed up elsewhere I suppose it's fine. At least you can recover if something nasty encrypts all your data - something that a NAS doesn't protect against.
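
If you did go the RAID 1 route on Linux, mdadm is the usual tool. A rough sketch of setting up a two-drive mirror - the device names are only placeholders, so substitute your own:

  # build a two-disk RAID 1 array from two blank drives (example device names only)
  sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
  # put ext4 on the new array
  sudo mkfs.ext4 /dev/md0
  # watch the initial sync progress
  cat /proc/mdstat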

I just read a report that drives most often fail just before they turn 3 years old, if that's of any comfort. :)

e2fsck ?

https://www.2daygeek.com/fsck-repair-corrupted-ext4-file-system-linux/
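
The usual drill is to unmount the partition first and then run e2fsck on it - something along these lines, assuming the data partition is /dev/sdb1 (swap in whatever yours actually is). On a 10TB drive a forced check will take a while.

  # never fsck a mounted filesystem - unmount it first
  sudo umount /dev/sdb1
  # -f forces a full check even if the FS is marked clean, -p auto-fixes the safe stuff
  sudo e2fsck -f -p /dev/sdb1
  # or run it interactively, so you are asked before anything is changed
  sudo e2fsck -f /dev/sdb1
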
Mick

Zilog Inside! nascom.info for Nascom & Gemini
Preliminary MMBasic docs & my PCB designs
 
Grogster

Admin Group

Joined: 31/12/2012
Location: New Zealand
Posts: 9308
Posted: 11:34pm 18 Mar 2023

One single 10TB drive.

I've had a good run with the WD Red series, but that is just me - others might hate them.  Drive brand loyalty seems to be rather person-specific!

Anyway, I am consolidating three separate servers into one using the 10TB drive.  This will also help to save some power, as I will only need to run one box 24/7 instead of three.

As mentioned, everything is backed up on other drives - I'm COPYING to the new 10TB setup, and will keep the old servers in a powered-down state somewhere outside this house, so I effectively have an off-site backup in the old boxes.

So, if the 10TB drive DID die, I can replace and restore it - it just takes a lot of time for 10TB.

Will look into e2fsck.
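
Looks like tune2fs is worth a look at the same time - from what I can see it will show when the filesystem was last checked, and can turn the periodic boot-time checks back on (a lot of distros seem to ship ext4 with them disabled). Something like this, if I'm reading it right - /dev/sdb1 is just a stand-in for the real partition:

  # show superblock info: last checked time, mount count, check interval etc.
  sudo tune2fs -l /dev/sdb1
  # force a check roughly every 30 mounts or every 3 months, whichever comes first
  sudo tune2fs -c 30 -i 3m /dev/sdb1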
Smoke makes things work. When the smoke gets out, it stops!
 
bigmik

Guru

Joined: 20/06/2011
Location: Australia
Posts: 2914
Posted: 12:21am 19 Mar 2023

Hi Grogster, All,

I run two NAS boxes, one with 6 x 2TB drives and one with 5 x 3TB drives, all in RAID 5.

If a single drive fails (as has happened a couple of times) the RAID will rebuild itself once a new drive is installed, although it takes nearly 2 days to do so.

I feel comfortable enough with that, plus I back up anything important onto both NAS boxes as well as onto my PC.

But nothing is infallible; a lightning strike can wipe everything.

IMHO, use another 10TB drive that is an EXTERNAL USB3 one, do regular update backups from your internal 10TB drive onto it, and keep the external disconnected when not needed. Then it should be safe sitting on the shelf.
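
A plain rsync does that sort of update backup nicely - after the first full copy it only transfers what has changed, so the regular runs are quick. Roughly like this, with made-up mount points (note that --delete makes the external an exact mirror, so deletions propagate too):

  # mount the external drive, mirror the internal data onto it, unmount again
  sudo mount /dev/sdc1 /mnt/backup
  sudo rsync -aHv --delete /mnt/nas/ /mnt/backup/
  sudo umount /mnt/backup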

I would be extremely uncomfortable relying on a solitary drive to contain all of my data.

FWIW, I prefer WD drives as I have had too many Seagates fail without warning (and a few with ominous clicking, from which I could recover the data before they fully died).

Regards,

Mick
Mick's uMite Stuff can be found >>> HERE (Kindly hosted by Dontronics) <<<
 
tgerbic
Regular Member

Joined: 25/07/2019
Location: United States
Posts: 47
Posted: 05:37am 19 Mar 2023

I took a different approach to reliability and uptime.

I have two Fedora Linux workstations, my daily use AMD system and a similar i7 backup system.

The AMD system has 21T of storage arranged as three 3T drives and two 6T drives (divided in two to make four 3T partitions). If I have to restore anything, 3T is about all the patience I have. Drives are about half Seagate and half WD.
The root drive and two other 3T drives/partitions are backed up onto a mirror set which is only mounted when backups occur. Another 3T drive holds misc old files and is backed up on the i7 system.

The i7 system is a functional mirror of the AMD system and is only powered up to do backups and OS upgrades to match the AMD system. It is there for quick recovery in case of a motherboard or power supply failure on the AMD system, and as an offline backup. If something happens, I just fire up the i7 system, replace whatever has gone wrong on the AMD system, and switch back. That has only happened once, due to a bad power supply.

All the computers, networking equipment and accessories are on a power conditioning UPS. It has a run time of at least an hour, maybe two with the screens and printers turned off.

It has been about 5 years since my last drive problem, and about 8 or 9 since the one before that. I had a couple of bad drives around 2002. I have not lost any files or had any corruption, viruses or malware.

No other equipment has failed in 20+ years. I consider myself really lucky, and I really believe in on-line UPSs. I have not noticed any difference between Seagate and WD for reliability. I believe the power conditioning and uninterrupted power have a lot to do with equipment longevity; drives are always shut down by the OS, rather than by a sudden loss of power.
 