
The following is an archive of a post made to our 'vox-tech mailing list' by one of its subscribers.

Re: [vox-tech] Three Install Questions


On Mon 21 Feb 05, 11:14 AM, Rick Moen <rick@linuxmafia.com> said:
> Quoting Peter Jay Salzman (p@dirac.org):
> 
> > 1. Having multiple swaps per disk is of almost no benefit at all.
> 
> So, a client of mine builds Linux servers, and provides a custom RHEL
> 3.0 build image for one particular customer that buys a lot of 4 GB RAM
> dual Opteron systems with dual 73GB SCSI drives.  Customer management,
> for reasons that passeth understanding, insisted they wanted 32GB of
> swap space on each node,

Jumping Jehosephat!!!

> and moreover wanted all of it in a single partition on the boot drive.

core dumped.

> Such systems were duly delivered, but
> customer soon reported that the systems were hanging hard while in
> production use.
> 
> Extensive tests followed using the Cerberus Test Control Suite (CTCS),
> which stress-tests Linux boxes using a number of simultaneous software
> processes including iterative kernel compiles.  With a single 32GB swap
> partition, CTCS induced a system hang in 1 day.  With a pair of 16GB
> swap partitions, CTCS hung the box in 2 days.  With four 8 GB swap
> partitions, five days.  And thus up to the ideal configuration, using as
> many 2 GB swap partitions as the limits on SCSI device numbers
> permitted, in which, as far as we could tell, CTCS ran forever.  The
> customer's load image was duly modified, and the cause has been
> presumptively attributed to hitting a bug in the RHEL 3.0 kernel's VM code.
> 
> (Yes, there was indeed no rational reason why customer needed to deploy
> all that swap in the first place.  But that's not the point I was
> making.)
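For readers wondering what the working layout Rick describes would look like in practice, here is a minimal /etc/fstab sketch. The device names are purely illustrative (not from the actual systems), and each partition would first need `mkswap` run on it before `swapon -a` activates the set. Equal `pri=` values make the kernel rotate among the areas rather than filling them one at a time:

```text
# /etc/fstab -- hypothetical layout: several 2 GB swap partitions
# spread across two SCSI drives (device names are illustrative)
/dev/sda5   none   swap   sw,pri=1   0   0
/dev/sda6   none   swap   sw,pri=1   0   0
/dev/sdb5   none   swap   sw,pri=1   0   0
/dev/sdb6   none   swap   sw,pri=1   0   0
```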
 
But does anyone even KNOW what caused the hang?  Maybe it was the fact that
fdisk was run on Wednesday without the hardware clock reading 3:30 while
Mars and Jupiter were in alignment.  That's the thing with bugs -- you
really don't know anything unless you study the bug.

Other than a story of a poorly understood case where multiple swaps per disk
seemed to suppress the expression of a VM bug, can you describe what benefit
multiple swaps per disk provide?
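For what it's worth, whichever layout you end up with, the kernel reports the active swap areas in /proc/swaps (sizes in KiB). A minimal Python sketch for reading that table back -- the sample text below is illustrative, not from the systems in question:

```python
def parse_swaps(text):
    """Parse /proc/swaps-style output into (device, size_kib, priority) tuples."""
    entries = []
    for line in text.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 5:
            entries.append((fields[0], int(fields[2]), int(fields[4])))
    return entries

# Illustrative sample in the /proc/swaps format (two 2 GB partitions).
sample = """\
Filename                                Type            Size    Used    Priority
/dev/sda5                               partition       2097144 0       -1
/dev/sda6                               partition       2097144 0       -2
"""

for dev, size_kib, prio in parse_swaps(sample):
    print(dev, size_kib // 1024, "MiB, priority", prio)
    # -> /dev/sda5 2047 MiB, priority -1  (and likewise for sda6)
```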

Maybe someone, somewhere can tell a story where multiple swaps per disk
crashed a system due to a filesystem bug.  ;)

Pete


-- 
Save Star Trek Enterprise from extinction: http://www.saveenterprise.com

GPG Fingerprint: B9F1 6CF3 47C4 7CD8 D33E  70A9 A3B9 1945 67EA 951D
_______________________________________________
vox-tech mailing list
vox-tech@lists.lugod.org
http://lists.lugod.org/mailman/listinfo/vox-tech


