Hello,
I'm going to maintain a fairly large Squid installation. We're currently
running squid-1.1.5 on an Ultra1 with 425 MB RAM and a 12 GB RAID stripe for
the cache. The setup is stable, and the performance is "still" satisfactory.
Squid is leaking memory, but slowly.
A couple of questions:
Does it make sense to use GNU malloc on a Solaris machine?
Are there any improvements from using it?
Would a local caching DNS server help, or is a large number of
dnsserver processes enough (currently 30)?
What is the maximum possible cache size on this machine, and
when do we hit the throughput barrier? Currently we're forwarding
approx. 2.5 GB/day, and this number is rising.
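For a rough feel for the numbers (just my back-of-the-envelope sketch; the
peak factors are assumptions, not measurements on this box):

    # Back-of-the-envelope throughput estimate for ~2.5 GB/day of
    # forwarded traffic; the busy-hour peak factors are assumptions.
    bytes_per_day = 2.5 * 1024**3
    seconds_per_day = 24 * 60 * 60

    avg_rate = bytes_per_day / seconds_per_day
    print("average: %.1f KB/s" % (avg_rate / 1024))        # ~30 KB/s

    # Traffic is bursty; assume the busy hour runs 3-5x the daily average.
    for factor in (3, 5):
        print("peak x%d: %.1f KB/s" % (factor, avg_rate * factor / 1024))

So the average forwarding rate is only on the order of 30 KB/s; even with a
generous peak factor the raw bandwidth looks modest, I suppose.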
Cache size is 9 GB. The hit rate is ~38%. Is this a good rate? Two fairly
large siblings get a 100% hit rate (of course); I guess this
will raise the total hit rate a good deal. The ICP query hit rate
is 14%.
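My rough reckoning of the combined effect (assuming the 14% ICP hit rate
applies to every request that misses the local cache, which is a guess):

    # Combined hit-ratio estimate; the rates are the figures above, and
    # the assumption is that sibling hits cover 14% of the local misses.
    local_hit = 0.38   # local cache hit rate
    icp_hit = 0.14     # fraction of local misses answered by a sibling

    combined = local_hit + (1 - local_hit) * icp_hit
    print("effective hit rate: %.1f%%" % (combined * 100))   # ~46.7%

So the siblings would lift the effective hit rate to somewhere around 47%,
if that assumption holds.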
Which version is currently best suited (most stable)? Could
the NOVM version be a good choice? There is still a large number
of unused file descriptors...
Is file system tuning possible and reasonable on a UFS stripe?
Thanks for any hints,
Dirk
--
Dirk Vleugels
FTP- & Proxy-Services
UUnet Deutschland GmbH
Tel. +49 231 972 00
Fax. +49 231 972 1180
Emil-Figge-Strasse 80
44227 Dortmund, Germany
Dirk.Vleugels@de.uu.net
URL: http://www.uunet.de