Re: Cache Overfilling on SOLARIS

From: Robert Collins <robert.collins@dont-contact.us>
Date: Tue, 18 Jul 2000 07:50:23 +1000

There is a known bug in Squid 2.3.STABLE3 relating to disk space, which
has already been patched (see http://www.squid-cache.org/Versions/v2/2.3/).
Either get one of the Squid 2.3 snapshots, or apply the patch:
http://www.squid-cache.org/Versions/v2/2.3/bugs/#squid-2.3.stable3-storeExpiredReferenceAge

However, as your 2.2.STABLE5 caches are overfilling as well, I'd look at
your cache_dir entries.
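
As an illustration only (the path and sizes below are made up, not taken
from your config), a 2 GB cache partition is normally given a cache_dir
size well under the partition's capacity, since Squid's size accounting
does not include filesystem overhead, the swap.state log in each cache
directory, or UFS's reserved free space; the swap watermarks then tell
Squid to start evicting before the configured size is reached:

    # Squid 2.3 syntax; 2.2 uses the same line without the "ufs" type
    cache_dir ufs /cache01 1600 16 256

    # begin replacement at 90% of the configured size, aggressively at 95%
    cache_swap_low  90
    cache_swap_high 95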

Rob

----- Original Message -----
From: "Murphy Terrance M" <MurphyTerranceM@JohnDeere.com>
To: "'squid-users@IRCACHE.NET'" <squid-users@ircache.net>
Sent: Tuesday, July 18, 2000 5:03 AM
Subject: Cache Overfilling on SOLARIS

>
> Greetings all,
>
> I am blessed as the administrator of a busy Squid cache. We have three
> Solaris boxes, at the moment two on 2.2STABLE5 and one on 2.3STABLE3. All
> three are load balanced behind one IP and function as a single parent
> proxy.
>
> In that time I have been fighting Squid's tendency to overfill the cache
> (all Squid versions), which causes hundreds of "Terminated abnormally"
> messages in the cache.log files in a very short time. The errors look
> like this in 2.3STABLE3:
>
> --------------------------------------------------------
> diskHandleWrite: FD 6: disk write error: (28) No space left on device
> FATAL: Write failure -- check your disk space and cache.log
> Squid Cache (Version 2.3.STABLE3): Terminated abnormally.
> CPU Usage: 13.210 seconds = 3.160 user + 10.050 sys
> Maximum Resident Size: 0 KB
> --------------------------------------------------------
>
>
> I have read many messages about similar problems in this forum but have
> not been able to cure mine. The only thing that works is to newfs the
> cache partitions, run squid -z, and start over. It happens so often that
> I've written a ksh script to do the whole rebuild automatically (except
> OK'ing the newfs for each partition!!). I'm hoping that someone here
> might see what I'm missing.
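>
> In outline, such a rebuild amounts to something like the sketch below --
> the mount points, metadevice names and Squid paths are hypothetical
> stand-ins, not the real layout:
>
>     #!/bin/ksh
>     # Hypothetical paths -- adjust to the real layout before use.
>     SQUID=/usr/local/squid/bin/squid
>
>     $SQUID -k shutdown            # stop Squid before newfs'ing anything
>     sleep 30
>
>     for fs in /cache01 /cache02   # ...one entry per 2 GB cache partition
>     do
>         # look up the raw device for this mount point in /etc/vfstab
>         raw=`nawk -v mp=$fs '$3 == mp { print $2 }' /etc/vfstab`
>         umount $fs
>         newfs $raw                # newfs prompts for its own "OK"
>         mount $fs
>     done
>
>     $SQUID -z                     # recreate the empty cache directories
>     $SQUID                        # start Squid again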
>
> Details
> ----------
> The boxes are Sun E3000's, each with 2-3 CPUs and 512-1024 MB of RAM. The
> disks are remote and high-speed. I have noticed that at times, if I
> remove one offending partition, the remaining ones might function
> normally. More often than not, however, all partitions crap out at about
> the same time (since they all reach capacity at the same time).
>
> Each of the three caches consists of 12-14 2 GB partitions. Each 2 GB
> partition is administered by Solaris's disk tool, SOLSTICE, as a
> journaled filesystem with plenty of journaling space (according to Sun's
> directions). I'm pretty sure that I have the new cache_dir directive
> correct in v2.3. I've tried 2.3 with and without the external dnsservers.
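>
> (For reference, the 2.3 form of the directive adds a storage type as its
> first argument -- the path and size here are placeholders only:
>
>     Squid 2.2:  cache_dir     /cache01 1600 16 256
>     Squid 2.3:  cache_dir ufs /cache01 1600 16 256
>
> and on both versions the Mbytes value needs to stay well under the
> partition's real capacity.)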
>
> For these three boxes, Squid's cachemgr.cgi reported a total of 131
> hits/sec averaged over a five-minute period during rush hour today.
> Things still work when I have to take one machine down to newfs the
> cache, but they don't move very fast. I'm building another box to add to
> this parent cluster. I'm thinking about not using the SOLSTICE disk
> manager for the cache dirs on this one, and just using the raw devices.
> Anyone have an opinion on this option? Or any other thoughts?
>
>
> Terry Murphy
> Network Public Access
> Deere and Company
> 309-765-0325
> MurphyTerranceM@JohnDeere.com
>
Received on Mon Jul 17 2000 - 15:48:42 MDT
