Greetings,
Just a few quick questions that have been bothering me for a while -
they may be stupid questions, but they bother me.
The excerpt below is from the Release Notes v1.1:
*************
given:
DS = amount of 'cache_swap' / number of 'cache_dir's
OS = avg object size = 20k
NO = objects per L2 directory = 256
calculate:
L1 = number of L1 directories
L2 = number of L2 directories
such that:
L1 x L2 = DS / OS / NO
************
Now, should the last line be:
L1 x L2 = ( DS / OS ) / NO
or
L1 x L2 = DS / ( OS / NO )
?
I'm assuming that it is: L1 x L2 = ( DS / OS ) / NO
So if I had 4 * 6.2Gb cache_dir's with avg object size 20k, then a
suitable L1 x L2 combination would be:
( 6200 / 0.02 ) / 256 = 1211
16 x 128 = 2048, which is >= 1211
Is this approximately correct? I figured that having the LHS larger
rather than smaller is better. Or would it be better to use 16 x 96
(= 1536)?
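As a sanity check on the arithmetic above, here is a small sketch that computes the minimum L1 x L2 product for one cache_dir, assuming the formula is read as ( DS / OS ) / NO (the function name and defaults are mine, not from the Release Notes):

```python
def min_l1_x_l2(cache_dir_mb, avg_obj_mb=0.02, objs_per_l2=256):
    """Minimum L1 x L2 product for one cache_dir, per the
    Release Notes formula read as (DS / OS) / NO."""
    return (cache_dir_mb / avg_obj_mb) / objs_per_l2

need = min_l1_x_l2(6200)        # one 6.2 GB cache_dir, 20k objects
print(round(need))              # -> 1211
print(16 * 128 >= need)         # -> True (2048 directories)
print(16 * 96 >= need)          # -> True (1536 directories)
```

So on this reading both 16 x 128 and 16 x 96 clear the 1211 threshold.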
Question number 2:
This cache will run on a P166/Linux and will have 128M RAM to start with
- I'm considering running Squid-NOVM. The max requests per hour will be
~3000. Since the request numbers are small, I'm planning on using
ULTRA-DMA/33 EIDE drives - not SCSI - and the machine will only be running
Squid.
Filesystem blocksize will be 8k. And as stated before, there will be 4 *
6.2gb cache_dirs.
Correct me if I'm being irrelevant, but gnumalloc will be used (is
dlmalloc better, or have I lost the plot?).
Is the above a healthy way to go? Have I overlooked something?
Any advice and comments on the suitability of NOVM and the figures above
would be greatly appreciated.
Thanks in advance..
Umar.
--
Umar Goldeli
SYNFLUX International P/L.
P.O. Box 98, Five Dock, N.S.W. 2046, Australia
e-mail: umar@synflux.com.au
Phone: +612-9712-2411
Fax: +612-9712-2399
WWW: http://www.synflux.com.au

Received on Mon Oct 06 1997 - 15:39:10 MDT