Hi
I am not sure if someone replied to you - It seems not... sorry for the
late reply - I got removed from the list...
-----You wrote---------
Hello All.
What hardware do I need to run Squid if it has to handle anywhere from 20
up to 1000 requests per second?
--------------
Firstly - I assume that you do mean "a second" not "a minute"
(even if you didn't, this may be interesting... anyway :)
20 requests a second is 1200 requests a minute... 1000 a second is 60000 a
minute.... quite a load. If you take the average size of an HTTP object as
somewhere around 6-8k, then at the top end you're moving roughly 6M a second
(you'll need Fast-ether or FDDI, since that means it's doing up to 6M in and
6M out at the same time). Over a day, 20 a second works out to about 1.7
million requests, and 1000 a second to over 86 million.
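To spell the arithmetic out (taking 6k as the working average, which is an
assumption - your real mix may be bigger or smaller):
   1000 req/s x 6 KB        = ~6 MB/s each way (~48 Mbit/s)
     20 req/s x 86400 s/day = 1 728 000 requests/day
   1000 req/s x 86400 s/day = 86 400 000 requests/day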
I would guess that to handle this you would need either a really
high-end box or some kind of cluster solution... possibly a "RAIM"
(redundant array of inexpensive machines).
A P166 with 128M of RAM, Linux, SCSI using multiple 4 gig Barracuda disks and
software RAID can handle about 300 000 requests a day without problems;
you can possibly speed this up further with some of the things mentioned
by Stewart Forster in
http://squid.nlanr.net/Mail-Archive/squid-users/current/333
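(One thing worth trying instead of - or as well as - software RAID is giving
Squid one cache_dir per physical disk, so the seeks get spread across the
spindles. With current Squid versions that looks roughly like the lines below;
the paths and sizes are made-up examples, and older Squids use a slightly
different cache_dir syntax:
   # squid.conf - one cache directory per Barracuda
   cache_dir ufs /cache1 3500 16 256
   cache_dir ufs /cache2 3500 16 256
   cache_dir ufs /cache3 3500 16 256
)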
One such box handles about 300 000 requests a day, so for the bottom end of
your range (about 1.7 million a day) you could put in six or so machines of
the above spec and use multiple DNS entries to load balance between them
(the top end of the range needs proportionally more)...
(ie > host cache.is.co.za
cache.is.co.za has address 196.4.160.11
cache.is.co.za has address 196.4.160.73
cache.is.co.za has address 196.4.160.79
cache.is.co.za has address 196.4.160.74
cache.is.co.za has address 196.4.160.19
Simply put multiple A records in)
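(In BIND that is just the same A record repeated in the zone file - the name
and addresses below are the example above, so adjust them to your own zone:
   cache   IN   A   196.4.160.11
   cache   IN   A   196.4.160.73
   cache   IN   A   196.4.160.79
)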
If you want to use ICP between the machines it's going to load them a bit
more, but you will get a pretty serious hit rate... you might want to put in
another 2 machines or so (peering config sketched below).
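(Roughly, each cache lists the others as siblings in squid.conf; recent Squids
spell this cache_peer, older 1.x versions call it cache_host, and the hostnames
and ports here are just examples using the default http and ICP ports:
   cache_peer cache2.is.co.za sibling 3128 3130
   cache_peer cache3.is.co.za sibling 3128 3130
)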
Some of these can even be cold spares: if one of the caches dies, do
"ifconfig eth0 down" on it (if it's still half-alive) and bring its address
up on an unused machine with "ifconfig eth0 old-machines-ip-address"... (and
clear the stale ARP entries on your routers etc)
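(Something like this on the spare, using the first example address above -
the interface name and netmask are assumptions about your setup:
   # take over the dead cache's address on the spare box
   ifconfig eth0 196.4.160.11 netmask 255.255.255.0 up
   # routers may keep the dead machine's MAC cached for a while - delete the
   # stale entry there ("arp -d 196.4.160.11") or wait for it to age out
)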
Getting one huge machine is another alternative... but you will probably
run into all sorts of other limitations, such as the per-process filehandle
limit, for example (see the ulimit note below)... having a lot of small
machines would probably be your best bet... Give me a shout if I can be any
help.
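(Quick way to check and raise the filehandle limit from the shell that starts
squid - 4096 is just a number picked for illustration, and on some systems
Squid also needs rebuilding with a bigger FD_SETSIZE to actually use it:
   # show the current per-process file descriptor limit
   ulimit -n
   # raise it (pushing it past the hard limit needs root)
   ulimit -n 4096
)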
You should probably look at (you have no idea how long it took me to find
this link, but I was searching for the wrong thing :)
http://www.muc.de/~hm/linux/HA/High-Availability-HOWTO.html
http://www.cs.cornell.edu/home/mdw/hpc/hpc.html may have something
(probably not)
Oskar
Received on Fri May 30 1997 - 13:44:52 MDT