Re: [vox-tech] RAID systems
--- Bill Broadley <firstname.lastname@example.org> wrote:
> On Fri, Oct 15, 2004 at 02:08:42PM -0700, Jan W wrote:
> > Hi all:
> > I just finished setting up 3 tb raid systems, and I have some
> > questions about raid:
> > 1. What are the best methods to get benchmarks for speed/timing?
> Measuring a real-world workload under real-world conditions. Short
> of that I'd recommend bonnie++ and "PostMark: A New File System
> Benchmark".
Right now all I have been doing is cron'ing iostat to give me snapshots
every few minutes.
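For what it's worth, the cron'd snapshot idea can be sketched like this. Reading /proc/diskstats directly is a dependency-free stand-in for iostat here (the function name and log path are made up for illustration); the crontab line in the comment is closer to what's actually described above.

```shell
# Minimal sketch of a periodic disk-stats snapshot (hypothetical helper;
# the real setup crons iostat instead). Appends a timestamped copy of the
# kernel's disk counters to whatever log path is passed in.
disk_snapshot() {
    { date; cat /proc/diskstats; echo; } >> "$1"
}

# Crontab equivalent of the iostat approach (interval and path are
# assumptions):
# */5 * * * * /usr/bin/iostat -x >> /var/log/iostat.log 2>&1
```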
> > 2. What are the best recovery tools in case of failure?
> Tape backups, preferably offsite. Other than that, as long as you
> don't lose more disks than you have redundancy for, it's just a
> matter of rebuilding onto the replaced disks.
> Of course if you had a user error, os error, major hardware problem,
> flood, electrical issue, malicious user, theft, earthquake, fire, etc
> you could be out of luck.
Yeah, the worst case is always what I plan for with these sorts of things,
but I guess no system is foolproof or failsafe.
> > 3. What filesystem/raid options work best for lots of small files
> > being written often?
> Lots, often? Care to quantify those? Ext3 isn't a bad place to start,
> and I believe it's the most heavily tested. Reiserfs seems to excel
> in huge directories of small files. XFS seems to specialize in huge
> files and high bandwidths. Is your application disk limited? The
> postmark benchmark above will let you quantify the performance of
> a particular mixture of file sizes, reads, and writes.
The best idea I have of the population of files that will be stored is:
random. I have general statistics, but they can change on even a
daily basis. Most of the storage would be for millions of <64k text
files, but not always.
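Since the mix shifts from project to project, one cheap way to get those general statistics on each incoming population is to bucket file sizes up front. A rough sketch (the function name and bucket boundaries are made up here; -printf is GNU find):

```shell
# Hypothetical helper: bucket every file under a directory by size, to
# characterize a new project's population before tuning anything.
sizestats() {
    find "$1" -type f -printf '%s\n' | awk '
        { if ($1 < 4096) small++; else if ($1 < 65536) mid++; else big++ }
        END { printf "<4k: %d  4k-64k: %d  >=64k: %d\n", small+0, mid+0, big+0 }'
}
```

Running that against each delivery as it arrives would turn "pretty randomized" into actual numbers over time.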
> In some cases raid-1 can make more sense performance-wise (of course
> at a higher disk space overhead).
> > Here is the setup I have now:
> > 2.6 kernel (fedora core)
> > mdadm for managing metadevice
> > raid 5 -- default 64k chunk size
> > SATA for all raid drives
> > ext3 fs with -- extra inodes for lotsa little files and -- 4k block
> I believe ext3 will allocate additional inodes as needed, so no need
> to set extra ones explicitly.
One of the previous raid systems (SCSI hardware raid) that we had ran
out of inodes (it was formatted UFS and ran on Solaris) in the first
month or two that we used it for production. I just don't want to make
the same mistake twice...
> > size with a stride of 16 so that it matches the raid chunk size
> Do you have an existing production system? How many files total?
> How many files per directory (average), what is the average filesize?
As mentioned before, pretty randomized populations, and there's a high
degree of variance between projects. Basically, we are sent huge
populations of data, we process the data into different formats, and
return it. The input data are mostly correspondence (email, word docs,
spreadsheets, etc), but that is generally just a rule of thumb... The
populations are simply moving targets that vary widely from each
project, and that is all that I have to go on... :)
For some projects, there can be 3 million files where 99% are less than
4k in size. For others there can be 3000 files where all are more than
128k. Most fall somewhere in between. Knowing exact numbers would
mean that I could tell the future and know what would be coming in the
door (which would be cool...).
> I wouldn't explicitly set the block size unless you have hard numbers
> to prove it's the optimal choice.
Again, here is my dilemma. I just chose something that would hopefully
be "good enough(tm)" to use every day, and something that would handle 30
gazillion 2k files (I for-sure know there will be gazillions of emails,
most of which are less than 2k, what I don't know is the ratio of
smaller files to larger files).
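For reference, the stride=16 in the setup above just falls out of the chunk/block arithmetic. A sketch (the /dev/md0 device name is an assumption, and the mkfs line is commented out because it's destructive):

```shell
chunk_kb=64                        # mdadm chunk size from the setup above
block_kb=4                         # ext3 block size
stride=$((chunk_kb / block_kb))    # fs blocks per raid chunk: 64 / 4 = 16
echo "stride=$stride"
# mkfs.ext3 -b 4096 -E stride=$stride /dev/md0
# (older e2fsprogs spelled the raid option -R stride= instead of -E)
```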
> > Any and all comments are welcome, I just need to get a better idea
> > what people are using for their production systems (and how they
> > are used). I don't need anything special, just a rock-solid file
> > server used by everyday production systems.
> IMO ext3. One quick trick is to cp /proc/mdstat (once you are happy
> with the state) to /var then every 15 minutes crontab a diff of the
> 2 files. If anything happens you get an email warning you of the
> issue. Of course make sure the email goes somewhere useful.
I am going with ext3 until there is a really good motivation to go
towards something else. Good point about the diff'ed mdstats, that's
probably just good everyday practice... thanks.
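A sketch of that cron'd diff trick (the function name, snapshot path, and mail recipient are assumptions; in practice it would live in a small script called from crontab):

```shell
# Succeed (exit 0) and print the diff only when the live mdstat no
# longer matches the known-good snapshot.
mdstat_changed() {
    ! diff -u "$1" "$2"
}

# One-time, once the array state looks right:
#   cp /proc/mdstat /var/mdstat.good
# Then in crontab, wrapped in a script, something like:
#   */15 * * * * /usr/local/sbin/check-mdstat | mail -s "mdstat changed" root
```

mdadm can also do this natively in --monitor mode with a MAILADDR set, which may be less fragile than hand-rolled diffs.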
> > I feel pretty good about what I've done already, but TBs of data
> > make me wince, especially if the data is expensive and/or irreplaceable.
> > Dealing with that sorta stuff makes me want to make extra-specially
> > doubly sure that all my duckies are in rows...
> Monitor the raid for failures and make backups. I'd recommend a UPS
> and a redundant powersupply as well.
I have a triple supply on the drive cabinet and a double supply on the
box, all fed by UPS.
Now to get the arrays syncing with our tape cabinet... or something
else that is a good backup solution.
Thanks for all the suggestions Bill... 'preciate it.
> Bill Broadley
> Computational Science and Engineering
> UC Davis
> vox-tech mailing list