On Wed, 5 Aug 1998, Bill Wichers wrote:
> First off, I've been noticing in my trusty MRTG graphs that the digest
> transfers that squid does hourly appear to take up a respectable amount of
> bandwidth during the short period over which they are transferred.
True; I will try to make it even smoother in b24. The current setup
simplifies debugging/understanding but is suboptimal.
> I can
> understand that it is of benefit to keep the digest as up to date as
> possible and thus it needs to be transferred quickly (no shaping) and at
> regular intervals.
I do not think there is such a requirement. Individual digests are independent
and their refresh times could vary. Most of this stuff should be
configurable, of course.
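For illustration, here is a sketch of what such knobs might look like in
squid.conf (the directive names are assumptions on my part, not necessarily
what this beta accepts):

    # hypothetical digest tuning; directive names are illustrative
    digest_generation on            # build and serve our own digest
    digest_rebuild_period 1 hour    # how often the in-memory digest is rebuilt
    digest_rewrite_period 1 hour    # how often the on-disk copy is rewritten
    digest_bits_per_entry 5         # size vs. false-hit trade-off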
> Has anyone thought to maybe compress the digest, and
> send the compressed digest out to all the caches that want it?
Good digests are not compressible. Ideally, 50% of the bits should be "on",
and the positions of those bits should be random. In practice, the
"utilization" is a bit lower than 50%. The last time I tried, "gzip -9"
shrank a "good" digest by only about 6%. If your digests have utilization
close to 50%, you will not gain much from compressing them.
Transmitting "deltas" or "diffs" is another question; as is tuning digests to
have 50% utilization.
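To see the effect, here is a rough sketch (Python; the digest size and fill
ratios are arbitrary assumptions) that gzips a random bit array at several
utilizations:

    # Sketch: compress a random bit array at several fill ratios to see
    # how the gzip savings vanish as utilization approaches 50%.
    import gzip
    import random

    def fake_digest(nbits, fill):
        """Set each bit independently with probability `fill`,
        mimicking a Bloom-filter-style digest."""
        random.seed(0)
        out = bytearray(nbits // 8)
        for i in range(nbits):
            if random.random() < fill:
                out[i // 8] |= 1 << (i % 8)
        return bytes(out)

    for fill in (0.05, 0.25, 0.50):
        raw = fake_digest(8 * 128 * 1024, fill)  # 128 KB digest
        packed = gzip.compress(raw, 9)
        saved = 100.0 * (1 - len(packed) / len(raw))
        print("fill %.2f: gzip -9 saves %.1f%%" % (fill, saved))

At 50% random fill each bit carries close to a full bit of entropy, so gzip
has nothing to squeeze; the savings you do see at lower utilizations are the
same redundancy a "delta" scheme would be exploiting.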
> I would
> think it would be possible to compress the digest once per hour and
> transfer it to all the squids automatically... Although this would make a
> pretty big spike in the parent's outgoing traffic... Hmm.
Pushing digests to children is possible. Look for Pei Cao's Summary Cache as
an example. However, for administrative and hierarchical reasons, pulling is
probably better. The performance advantage of pushing is questionable until
multicast is in place.
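As a sketch of the pull model (the host name is made up, and the digest URL
follows the squid-internal convention, which I am assuming here), a child
simply fetches the parent's digest like any other object, which is why pulling
needs no new machinery on the parent side:

    # Hypothetical: a child pulls the parent's digest over plain HTTP.
    import urllib.request

    PARENT = "http://parent.example.com:3128"  # made-up parent cache
    url = PARENT + "/squid-internal-periodic/store_digest"
    with urllib.request.urlopen(url, timeout=10) as resp:
        digest = resp.read()
    print("fetched %d byte digest" % len(digest))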
Alex.