Linux Users' Group of Davis
Next Meeting:
July 7: Social gathering
Next Installfest:
Latest News:
Jun. 14: June LUGOD meeting cancelled
Page last updated:
2004 Oct 18 13:55

The following is an archive of a post made to our 'vox-tech mailing list' by one of its subscribers.

Re: [vox-tech] RAID systems

Re: [vox-tech] RAID systems

On Fri, Oct 15, 2004 at 02:08:42PM -0700, Jan W wrote:
> Hi all:
> I just finished setting up 3 tb raid systems, and I have some general
> questions about raid:
> 1.  What are the best methods to get benchmarks for speed/timing?

Measuring a real-world workload under real-world conditions.  Short
of that, I'd recommend bonnie++ and "PostMark: A New File System Benchmark".
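Short of a real workload, a bonnie++ run against the array might look like
this (the mount point, file-set size, and user are illustrative, not from
the post):

```shell
# Sequential-I/O and small-file tests on the RAID mount point.
# -s (in MB) should comfortably exceed RAM so the page cache doesn't
#    mask raw disk speed;
# -n 128 creates 128*1024 small files for create/stat/delete timing;
# -u is required when running as root.
bonnie++ -d /mnt/raid -s 8192 -n 128 -u nobody
```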

> 2.  What are the best recovery tools in case of failure?

Tape backups, preferably offsite.  Other than that, as long as you don't
lose more disks than you have redundancy for, it's just a matter of
rebuilding onto the replaced disks.
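With mdadm, replacing a single failed disk is roughly the following
(the md device and partition names are assumptions, not from the post):

```shell
# Mark the bad disk failed and pull it from the array
# (/dev/md0 and /dev/sdb1 are illustrative names).
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1

# After swapping in a new drive partitioned to match, re-add it;
# the RAID-5 rebuild onto the new disk then starts automatically.
mdadm --manage /dev/md0 --add /dev/sdb1

# Watch rebuild progress.
cat /proc/mdstat
```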

Of course, if you hit user error, OS error, a major hardware problem,
flood, an electrical issue, a malicious user, theft, earthquake, fire,
etc., you could be out of luck.

> 3.  What filesystem/raid options work best for lots of small files
> being written often?

"Lots" and "often"?  Care to quantify those?  Ext3 isn't a bad place to
start, and I believe it is the most heavily tested.  ReiserFS seems to
specialize in huge directories of small files.  XFS seems to specialize
in huge files and high bandwidth.  Is your application disk-limited?
The PostMark benchmark above will let you quantify the performance of
a particular mixture of file sizes, reads, and writes.

In some cases RAID-1 can make more sense performance-wise (at a higher
disk-space overhead, of course).

> Here is the setup I have now:
> 2.6 kernel (fedora core)
> mdadm for managing metadevice
> raid 5 -- default 64k chunk size
> SATA for all raid drives
> ext3 fs with -- extra inodes for lotsa little files and -- 4k block

Note that ext3's inode count is fixed when the filesystem is created (it
can't allocate more later), so reserving extra inodes up front is the
right call if you expect lots of little files.

> size with a stride of 16 so that it matches the raid chunk size

Do you have an existing production system?  How many files total?
How many files per directory on average, and what is the average file size?

I wouldn't explicitly set the block size unless you have hard numbers
proving it's the optimal choice.
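For reference, the stride the poster describes maps onto mke2fs roughly
like this (a sketch; older e2fsprogs spelled the option -R stride= rather
than -E stride=, and the bytes-per-inode value here is illustrative):

```shell
# ext3 (-j) with 4 KB blocks; stride = chunk / block = 64 KB / 4 KB = 16.
# -i 8192 gives one inode per 8 KB of space, i.e. extra inodes for
# lots of little files (the exact ratio is an assumption).
mke2fs -j -b 4096 -E stride=16 -i 8192 /dev/md0
```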

> Any and all comments are welcome, I just need to get a better idea of
> what people are using for their production systems (and how the tools
> are used).  I don't need anything special, just a rock-solid file
> server used by everyday production systems.

IMO, ext3.  One quick trick: once you're happy with the state of the
array, cp /proc/mdstat to /var, then have cron diff the two files every
15 minutes.  If anything happens, you get an email warning you of the
issue.  Of course, make sure the email goes somewhere useful.
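That trick is just a snapshot plus a cron-driven diff: cron mails a job's
output to MAILTO, so any change in /proc/mdstat becomes a warning email.
A minimal sketch (the snapshot path and mail address are illustrative):

```shell
# Once the array looks healthy, snapshot its state:
cp /proc/mdstat /var/mdstat.ok

# Crontab entries: a non-empty diff (i.e., any change in array state)
# produces output, which cron mails to MAILTO.
# MAILTO=admin@example.com
# */15 * * * * diff /var/mdstat.ok /proc/mdstat
```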

> I feel pretty good about what I've done already, but TBs of data makes
> me wince, especially if the data is expensive and/or irreplacable. 
> Dealing with that sorta stuff makes me want to make extra-specially
> doubly sure that all my duckies are in rows...

Monitor the RAID for failures and make backups.  I'd recommend a UPS
and a redundant power supply as well.

Bill Broadley
Computational Science and Engineering
UC Davis

LUGOD: Linux Users' Group of Davis
PO Box 2082, Davis, CA 95617

LUGOD is a 501(c)7 non-profit organization
based in Davis, California
and serving the Sacramento area.
"Linux" is a trademark of Linus Torvalds.
