Hi Dererk!
Add the "ignore-reload" option to your cache refresh policies. It looks like this:
refresh_pattern . 0 20% 4320 ignore-reload
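A fuller sketch combining ignore-reload with override-expire, if you also want squid's own min/max values to win over the origin's Expires header (the pattern and timing values here are illustrative, not a recommendation):

    # Ignore client no-cache/reload headers and override the origin's
    # Expires header; objects stay fresh for up to 4320 minutes (3 days).
    refresh_pattern . 0 20% 4320 ignore-reload override-expire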
Martin
-----Original Message-----
From: dererk_at_mail.buenosaireslibre.org [mailto:dererk_at_mail.buenosaireslibre.org]
Sent: Wednesday, July 14, 2010 12:14 PM
To: squid-users_at_squid-cache.org
Subject: Suspicious URL:[squid-users] Forcing TCP_REFRESH_HIT to be answered from cache
Hi everyone!
I'm running a reverse proxy (1) to help my httpd serve content fast and avoid going to the origin as much as possible.
Doing that, I found I was making a _lot_ of TCP_REFRESH_HIT requests to the origin, even though I have an insanely long 10-year expiration date set on the HTTP response headers sent back to squid.
I verified that using wget -S and some fancy tcpdump invocations. I want to get rid of all TCP_REFRESH_HIT requests; the main reason is that there is no way some of these objects will ever change, so checking their freshness makes no sense and only increases server load (1 in 7 requests is a REFRESH_HIT).
I used refresh_pattern with override-expire and extremely high min and max values, with absolutely no effect.
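For reference, what I tried looked roughly like this (the pattern and values are illustrative, not my exact config):

    # Force matching objects to be considered fresh for a year (in
    # minutes), overriding whatever Expires header the origin sent.
    refresh_pattern -i \.(jpg|png|css|js)$ 525600 100% 525600 override-expire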
For the record, if I use offline_mode I partially get what I want, but unfortunately I lose the regex-matching flexibility that refresh_pattern has, which I need to exclude certain objects.
I enabled debugging for a blink of an eye and caught a request that goes out as TCP_REFRESH_HIT; as far as I understand it, the cached object is treated as stale and re-requested from the origin:
2010/07/14 13:35:58| parseHttpRequest: Complete request received
2010/07/14 13:35:58| removing 1462 bytes; conn->in.offset = 0
2010/07/14 13:35:58| clientSetKeepaliveFlag: http_ver = 1.0
2010/07/14 13:35:58| clientSetKeepaliveFlag: method = GET
2010/07/14 13:35:58| clientRedirectStart: 'http://foobar.com/object'
2010/07/14 13:35:58| clientRedirectDone: 'http://foobar.com/object' result=NULL
2010/07/14 13:35:58| clientInterpretRequestHeaders: REQ_NOCACHE = NOT SET
2010/07/14 13:35:58| clientInterpretRequestHeaders: REQ_CACHABLE = SET
2010/07/14 13:35:58| clientInterpretRequestHeaders: REQ_HIERARCHICAL = SET
2010/07/14 13:35:58| clientProcessRequest: GET 'http://foobar.com/object'
2010/07/14 13:35:58| clientProcessRequest2: storeGet() MISS
2010/07/14 13:35:58| clientProcessRequest: TCP_MISS for 'http://foobar.com/object'
2010/07/14 13:35:58| clientProcessMiss: 'GET http://foobar.com/object'
2010/07/14 13:35:58| clientCacheHit: http://foobar.com/object = 200
2010/07/14 13:35:58| clientCacheHit: refreshCheckHTTPStale returned 1
2010/07/14 13:35:58| clientCacheHit: in refreshCheck() block
2010/07/14 13:35:58| clientProcessExpired: 'http://foobar.com/object'
2010/07/14 13:35:58| clientProcessExpired: lastmod -1
2010/07/14 13:35:58| clientReadRequest: FD 84: reading request...
2010/07/14 13:35:58| parseHttpRequest: Method is 'GET'
2010/07/14 13:35:58| parseHttpRequest: URI is '/object'
While trying anything that might have some effect, I also gave ignore-stale-while-revalidate, override-lastmod, override-expire, ignore-reload and ignore-no-cache a try, and pushed refresh_stale_hit sky-high, and again, no effect :-(
What am I doing wrong? Is there any other way to keep REFRESH_HITs from being performed?
Greetings,
Dererk
ref:
1. Squid Cache: Version 2.7.STABLE7
configure options: '--prefix=/usr/local/squid'
'--bindir=/usr/local/bin' '--sbindir=/usr/local/sbin'
'--sysconfdir=/etc/squid' '--localstatedir=/var'
'--mandir=/usr/local/man' '--infodir=/usr/local/info'
'--disable-internal-dns' '--enable-async-io'
'--enable-storeio=aufs,ufs,coss' '--with-large-files' '--enable-snmp'
'--with-maxfd=8192' '--enable-htcp' '--enable-cache-digests'
Received on Wed Jul 14 2010 - 23:47:56 MDT
This archive was generated by hypermail 2.2.0 : Thu Jul 15 2010 - 12:00:04 MDT