Hi Eliezer
I am facing the caching problem with Squid 3.2 or higher; Squid 3.1 is
doing fine.
I have 4 caches, 3 running Squid 3.1 and the 4th running Squid 3.2. I
tried 1 worker and 4 workers, and the 3.1 versions are caching almost
twice as much.
There are many posts about the same problem; this is one of them:
squid-web-proxy-cache.1019090.n4.nabble.com/Caching-in-3-2-vs-3-1-td4472480.html#a4662974
I can see with my caches that Squid 3.2 is faster and can handle more
traffic, but it caches less.
When I reinstall the older Squid (3.1) in place of 3.2, I immediately
gain more caching.
Best Regards
Ayham
>>Hi Amos
>>I am trying to run the test, but I have very busy caches so I can't
>>do the test on the production environment. We are installing a
>>testing environment. Anyway, there is an old post about the same
>>problem; is there any answer to it?
>>squid-web-proxy-cache.1019090.n4.nabble.com/Caching-in-3-2-vs-3-1-td4472480.html#a4662974
>>Thanks
>>Ayham
> On 30/10/2013 9:10 p.m., Ayham Abou Afach wrote:
> >
> >
> > On 30/10/2013 2:51 a.m., Ayham Abou Afach wrote:
> >> Hi
> >>
> >> i have the following problem after moving from Squid 3.1 to (3.2
> >> or 3.3) with the same config:
> >> bandwidth saving decreases to about 50%.
> >> What is the difference between the versions related to caching
> >> behaviour?
> >>
> >> Does anyone have a solution to this problem?
> >>
> >> Regards
> >> On 10/29/2013 11:38 PM, Amos Jeffries wrote:
> >> The big caching-related changes:
> >>
> >> * 3.2 is now HTTP/1.1 - with extended cacheability and
> >> revalidation behaviour.
> >> - In some cases the HTTP/1.0-based savings calculation can show a
> >> decrease even as total bandwidth is reduced.
> >> - More cacheable content (in HTTP/1.1 almost anything is
> >> cacheable) can mean a higher spare HIT rate.
> >> - Average stored object age, size and near-HIT ratio also need to
> >> be given more weight for HTTP/1.1.
> >> - NP: several of the refresh_pattern ignore-* and override-*
> >> options cause a *reduction* in HTTP/1.1 compliant caches.
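> >> As a rough sketch (the patterns and numbers below are illustrative
> >> only, not a recommendation), a compliant configuration simply lets
> >> the origin Cache-Control and Expires headers drive caching:
> >>
> >>   refresh_pattern -i \.(gif|png|jpg|jpeg)$ 0 20% 4320
> >>   refresh_pattern . 0 20% 4320
> >>
> >> Adding violation options from the ignore-*/override-* family to
> >> such lines (override-expire or ignore-private are just examples of
> >> that family) is the kind of thing that can shrink what an
> >> HTTP/1.1 cache such as 3.2 is willing to cache.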
> > So that means the traffic is revalidated more under HTTP/1.1, and
> > that is why we are losing some objects from the cache.
> > I am using refresh patterns without any effect.
> >> * 3.2 is validating the safety of intercepted traffic.
> >> - Unsafe traffic will not be cached.
> > All my traffic is passing to my 3.2 box as spoofed traffic with
> > TPROXY, but what do you mean by unsafe traffic?
> >
>
> Traffic where the HTTP-layer details say the request is going to a
> URL which apparently exists on IP(s) different from the ones the
> client is going to.
> Were they hijacked and fetching malware? Or is the DNS simply
> rotating, or geo-IP based? Squid can't tell the difference, and it is
> unsafe to allow the malware risk to be cached and spread to all your
> clients by the proxy.
> NP: at this point Squid caching is not segmented by client IP
> address. But that would not help much anyway, as each fetch would
> just fill more cache space without adding HITs for other clients.
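> As a quick check (a sketch only; the exact message wording and the
> log path are assumptions), the host-verification failures that make
> intercepted traffic uncacheable are reported in cache.log, so
> something like:
>
>   grep "Host header forgery" /var/log/squid/cache.log
>
> should show whether your TPROXY traffic is tripping that check.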
>
> >> * 3.2 cache size calculations have been updated.
> >> - This uncovered a bug where the maximum_object_size directive
> >> must be placed above the cache_dir line for it to have any effect
> >> on raising the object size limit for that cache_dir.
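> >> In other words (the sizes and path below are illustrative only),
> >> this ordering raises the limit:
> >>
> >>   maximum_object_size 4 GB
> >>   cache_dir aufs /var/spool/squid 100000 16 256
> >>
> >> while declaring cache_dir first leaves that cache_dir using
> >> whatever maximum_object_size was in effect before it (the 4 MB
> >> default if none was set).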
> >> Amos
> >>
> >
> > I am aware of this bug. I have 2 GB objects in my cache, and I
> > think that is the limit imposed by Squid even if you set it to 4 GB
> > in the config.
> >
> >
> >
> > What we need to know is: is there any config that should be added
> > to the 3.2 config file to compensate for the reduction in bandwidth
> > saving?
> >
> > I have 4 caches, 3 running Squid 3.1 and the 4th running Squid 3.2.
> > I tried 1 worker and 4 workers, and the 3.1 versions are caching
> > almost twice as much.
>
> The answer to that depends on the specific URLs which have stopped
> being cached. Can you locate some requests which are cached by 3.1
> but not by 3.2 and retrieve the server HTTP headers? (debug_options
> 11,2 dumps the HTTP traffic headers to cache.log.)
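> A minimal way to do that (a sketch; the URL and log path are made up
> for illustration) is to raise the debug level in squid.conf:
>
>   debug_options ALL,1 11,2
>
> reconfigure Squid, fetch one of the affected URLs through the proxy,
> then pull its headers out of the log, e.g.:
>
>   grep -A 20 "example.com/some/object" /var/log/squid/cache.log
>
> and compare the Cache-Control, Expires and Vary headers seen by the
> 3.1 and 3.2 boxes.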
>
> Amos
>
>
Received on Thu Nov 07 2013 - 08:40:54 MST