Peter Jenny wrote:
>
> I thought by eliminating hard disks Squid would perform much faster, but
> I'm only seeing ~110 responses/second from my lab test configuration:
>
> Sun Ultra 5, 270 MHz, 1 GB RAM (remove the floppy disk and you can put 4 tall
> 256 MB "Ultra 10" DIMMs in an Ultra 5)
> Solaris 2.7
> 1x4 GB IDE disk, not used for caching or swapping
> Squid Cache: Version 2.2.STABLE5
> cache_dir /squid_cache 256 16 256
> (or cache_dir /squid_cache 768 16 256; made no difference)
> cache_mem 64 MB
>
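For what it's worth, Squid's own cache_mem already keeps the hot object set in
RAM, so putting the whole cache_dir on tmpfs largely means holding objects in
memory twice. A rough, untested squid.conf sketch of the alternative (the
numbers are only for illustration, not tuned for this box):

    # let Squid hold the hot set itself instead of going through tmpfs
    cache_mem 256 MB
    # keep the on-"disk" store small; it could also live on the idle IDE disk
    cache_dir /squid_cache 128 16 256
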
> /squid_cache is a Solaris tmpfs file system, which uses swap space; swap in
> turn uses RAM first, and only if it runs out of RAM will it use hard disk
> space that is defined as part of swap -- and I have no hard disk space
> defined as swap:
> > swap -l
> No swap devices configured
> > egrep swap /etc/vfstab
> swap - /tmp tmpfs - yes -
> swap - /squid_cache tmpfs - yes -
>
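Also note that tmpfs pages come out of the same pool of anonymous memory as
everything else, and with no swap device configured a tmpfs that fills up will
squeeze the rest of the system. If memory serves, Solaris tmpfs accepts a
size= mount option, so a capped (untested) /etc/vfstab entry would look
roughly like:

    # cap the cache file system so it cannot consume all of RAM (sketch)
    swap    -    /squid_cache    tmpfs    -    yes    size=512m
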
> Squid logging is minimized:
> cache_access_log /dev/null
> cache_store_log none
>
> Am using Polygraph 1.3.1 with roughly the "datacomm-1" benchmark settings to
> generate load and report results (minor modifications I made include reducing
> launch_win & decreasing the goal, because I usually don't want to wait for 1
> hour of results as I tweak squid.conf). If I increase the offered load above
> 110-120 operations/sec, Squid can't keep up and pretty soon polyclt crashes
> because of too many open connections.
>
> Squid uses about 88% of the CPU while running. I know a 270 MHz UltraSPARC
> isn't the latest or fastest (even within the Sun product line, not to mention
> Intel/AMD), but I hoped without hard disks to slow it down that Squid would do
> a lot better.
I think you'll find that tmpfs is not a good choice for this, and, IIRC, it
doesn't behave entirely in the expected way.
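One way to see where the time is going is to watch the box with stock Solaris
tools while polyclt is driving load; a rough sketch (1234 is a placeholder for
the real Squid PID from ps -ef):

    # overall CPU and paging picture during the run
    vmstat 5

    # per-syscall count/time summary for the running Squid process;
    # Ctrl-C stops it and prints the totals
    truss -c -p 1234

If Squid really is burning ~88% of the CPU in user time rather than sitting in
I/O, a faster cache_dir won't buy much, and tmpfs just adds memory pressure.
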
D
Received on Wed Dec 15 1999 - 21:09:43 MST