> I'm having serious memory problems with Squid v1.1.22 on Digital Unix 4.0D
> (yes I know I should upgrade to v2.0 but this is in a production
> environment).
>
> The problem is that the Squid process keeps growing until eventually it
> starts swapping. I have compiled with gcc 2.8.1 and gnumalloc, plus I have
> followed the FAQ's suggestions on reducing memory usage but to no avail.
I'm using the same version of Squid, and I see this problem too. However,
I let it run even while it's swapping: Squid's memory use grows until the
system runs out of memory (real and virtual), at which point Squid dies
with an IOT trap. It may run for over a day or for only a couple of hours,
but in the two weeks I've been using it, it has always died eventually.
According to doc/Release-Notes-1.1.txt, memory use consists of:

  - metadata for every object in the cache
  - in-memory objects (both completed objects and in-transit objects)
  - 'other data structures, unaccounted memory, and malloc() overhead'
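
To get a feel for the metadata term, here's a back-of-envelope estimate.
The average object size and per-object overhead are my own rough
assumptions, not figures from the release notes:

    3.2GB cache / ~13KB average object   = ~250,000 objects
    250,000 objects x ~100 bytes each    = ~25MB of metadata
    1GB cache, same assumptions          = ~8MB of metadata

So shrinking the disk cache helps some, but metadata alone can't explain
a process that grows until it exhausts swap.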
The problem may be explained by something that appears a bit later in
the documentation:

    'The in-transit objects are "locked" in memory until
    they are completed'
I gather this means that maximum_object_size only limits what Squid will
cache; even with it set to a relatively small value, many large objects
in transit can still cause memory use to increase dramatically.
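
As a rough illustration (the client count and object size here are
invented numbers, not measurements from my proxy):

    20 clients each fetching a different 50MB package
    = up to 20 x 50MB = ~1GB locked in memory at once,
      regardless of maximum_object_size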
The documentation also says there's a "delete behind" mode:

    which means Squid releases the section of the object which has
    been delivered to all clients reading from it
I speculated that for some reason (many clients reading the same set of
large objects, incorrect configuration on my part, buggy code) the delete
behind mode wasn't working. Note that, if I read the description
correctly, delete behind can only release data that has been delivered
to ALL clients reading an object, so a single stalled client would keep
the entire object locked in memory.
Accordingly, I decided to reduce "normal" memory demands by chopping my
cache from 3.2GB to 1GB and reducing cache_mem from 16MB to 12MB. Also,
because our developers often retrieve the same large packages many
times, and because maximum_object_size doesn't affect memory usage
anyway, I've set it back to 100MB.
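
For reference, these are the squid.conf lines I ended up with. I'm
assuming 1.1-style units here (cache_mem and cache_swap in MB,
maximum_object_size in KB), so double-check them against your own
1.1.22 config:

    cache_mem 12                 # in-memory object cache, MB (was 16)
    cache_swap 1000              # total disk cache, MB (was 3200)
    maximum_object_size 102400   # largest cacheable object, KB (= 100MB)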
That was two days ago, and I haven't seen the problem since. If I'm
right, I haven't fixed the problem; I'm just avoiding it until enough
clients try to grab enough large objects at once.
So, am I crazy or what? Does this make sense?
--
Claude Morin
System Administrator
ISG Technologies Inc.
Mississauga, Ontario, Canada