> I don't use the aggregate settings because they do not have any effect when
> individual pools are set.
> The individual settings more or less let the host fetch "immediately" pages
> that are slightly larger than 24000 bytes. If the host tries to fetch a
> larger page, then after the initial burst (~24KB) Squid will slow down to
> the rate defined by the increment value set by
> "delay_class2_individual_restore" (i.e. 1200 bytes/sec).
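The behaviour described in the quote corresponds to a class 2 delay pool with the aggregate bucket disabled and per-host buckets holding ~24000 bytes, refilled at 1200 bytes/sec. A minimal squid.conf sketch of that setup (the pool number and the allow-all ACL are illustrative, not taken from the original poster's config):

```
# One delay pool of class 2 (aggregate bucket plus per-host buckets)
delay_pools 1
delay_class 1 2
delay_access 1 allow all
# Aggregate restore/max = -1/-1 (unlimited, i.e. aggregate effectively off);
# individual restore/max = 1200 bytes/sec refill, 24000-byte burst bucket.
delay_parameters 1 -1/-1 1200/24000
```

With these values a small page drains the 24000-byte bucket and completes at full speed; anything larger continues at roughly 1200 bytes/sec once the bucket is empty, which matches the behaviour described above.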
If I understand this correctly, delay pool requests will be throttled
even when it results in Squid not fully using its input side
bandwidth. This is probably reasonable for an ISP's cache, but for a
small corporate cache, where phone time is metered, it would be nice
if there was a mechanism that would give otherwise idle bandwidth to
users even though they had exceeded their delay pool throughput
limit. One could then apply limits to all sites except those from
which large, business-related service packs need to be downloaded,
without suffering artificially extended phone calls.
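The static half of this — exempting particular sites from the pool — is already expressible with ACLs on delay_access, since requests that are denied entry to a pool are not throttled at all. A hedged sketch (the domain is a placeholder, not a real recommendation):

```
# Hypothetical domain standing in for the service-pack source
acl servicepacks dstdomain .example.com
# Requests matching the ACL bypass pool 1; everything else is throttled
delay_access 1 deny servicepacks
delay_access 1 allow all
```

This only covers a fixed exemption list; it does not provide the dynamic "lend idle bandwidth to throttled users" behaviour asked about below.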
Am I right in saying this mode is not supported, and has any thought
been given to it?
I can foresee some implementation difficulties if one tries to
account for idle time caused by slow high-priority sources, but a rule
based on there being only throttled requests pending might still be
useful.
(It's possible to reconfigure to limit bandwidth during a service pack
download, but the logistics are not that straightforward for a
function that is not the core of the business.)
--
David Woolley - Office: David Woolley <djw@bts.co.uk>, BTS
Home: <david@djwhome.demon.co.uk>, Wallington, England
TQ 2887 6421, 51 21' 44" N, 00 09' 01" W (WGS 84)

Received on Tue Dec 01 1998 - 12:31:33 MST