On Mon, 3 Aug 2009 03:20:53 +0700, ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░
<mirza.k_at_gmail.com> wrote:
> client ---> mikrotik ------> Internet
>                 |
>            Squid Server
>
> Client IP : 192.168.1.xxx
> Client gw 192.168.1.253 ( mikrotik LAN ip )
>
> Squid server ip : 10.0.0.1
>
> Mikrotik IP ( NIC that connected to SQUID ) : 10.0.0.2
> ------------
>
> The problem is I get this a lot:
> 1249243846.862 28460 192.168.1.123 TCP_MISS/000 0 GET
> http://mail.google.com/ - DIRECT/mail.google.com -
>
Status 000 there means the connection was closed before any data was
received back from mail.google.com.
>
>
> ## my squid.conf
> cache_peer 10.0.0.2 sibling 3128 0 no-query no-digest default
> cache_peer 192.168.1.0/24 sibling 3128 0 no-query no-digest default
The above line is not right.
cache_peer defines a single _proxy_ or _web server_ from which Squid may
fetch data. It cannot take a whole subnet like 192.168.1.0/24.
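If the only other proxy is the one at 10.0.0.2, the single cache_peer
line above it is all you need (options copied from your own config):

  cache_peer 10.0.0.2 sibling 3128 0 no-query no-digest default

The 192.168.1.0/24 machines are clients, not peers, and are already
covered by your 'client' ACL and http_access rules.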
>
> http_port 3128 transparent
Fairly bad idea. Squid will spam its logs with NAT lookup failures and do
extra work, slowing things down, trying to figure out who the real source
is on normal port-3128 requests.
Since your cache_peer lines link the two proxies through this port, a
moderately large amount of normal, non-intercepted requests are going to
come in on it.
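One way around this (the port numbers below are only an example) is to
keep 3128 as a plain port for the peer and regular clients, and intercept
on a separate port:

  http_port 3128                # normal traffic, including the sibling peer
  http_port 3129 transparent    # NAT-intercepted traffic from the mikrotik

The mikrotik redirect rule would then point at 3129 instead of 3128.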
> #http_port 3128
> hierarchy_stoplist cgi-bin ?
> #acl QUERY urlpath_regex cgi-bin \?
> #no_cache deny QUERY
> cache_mem 400 MB
> cache_swap_low 70
> cache_swap_high 90
Cache of 10000 MB * 20% garbage collection chunk = 2 GB of disk files
removed during garbage collection. That may occur as often as every 5
minutes or so. I would expect your squid to seriously slow down while
deleting 2000 MB of cached objects whenever the cache reaches 90% full.
I advise people with caches of 10GB or above to shrink the difference
between their cache_swap_low and cache_swap_high so they differ by only 1
(the minimum difference Squid currently allows). The garbage collection
_will_ slow squid down somewhat for its duration, and may discard objects
which are still useful when it empties very large chunks of cache.
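For example (a sketch; the exact values are your choice):

  cache_swap_low 90
  cache_swap_high 91

That caps each garbage collection pass at roughly 100 MB on a 10 GB cache.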
> dead_peer_timeout 10 seconds
>
If this is set shorter than the mean interval between your incoming
requests, the sibling keeps getting declared dead and you will see more
DIRECT-sourced requests than otherwise.
http://www.squid-cache.org/Doc/config/dead_peer_timeout/
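With a low request rate, raising it (the value below is only an example)
keeps the sibling from being declared dead between requests:

  dead_peer_timeout 30 seconds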
> ipcache_size 1024
> ipcache_low 98
> ipcache_high 99
> cache_replacement_policy heap LFUDA
> memory_replacement_policy heap GDSF
> maximum_object_size_in_memory 50 KB
> maximum_object_size 50 MB
>
>
> cache_dir aufs /var/spool/squid 10000 23 256
> cache_access_log /var/log/squid/access.log
> cache_log /var/log/squid/cache.log
>
> log_fqdn off
> log_icp_queries off
> cache_store_log none
> #emulate_httpd_log on
> pid_filename /var/run/squid.pid
> reload_into_ims on
> pipeline_prefetch on
> vary_ignore_expire on
>
> memory_pools off
> query_icmp on
> #quick_abort_min 0
> quick_abort_min -1
> quick_abort_max 0
> quick_abort_pct 98
> negative_ttl 1 minute
The above will cause Squid to store and keep sending 4xx and 5xx error
pages for a minute after the first sighting. This can keep showing a
visible connection problem to clients (an extended denial of service)
even if it occurred on only one individual request out of thousands.
Servers that send no expiry information on their error pages are not a
problem for Squid; they only DoS themselves during times of bad errors.
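If you would rather have errors re-fetched immediately, set it to zero:

  negative_ttl 0 seconds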
> half_closed_clients off
> read_timeout 5 minute
> request_timeout 1 minute
> client_lifetime 360 minute
> shutdown_lifetime 10 second
>
> acl all src 0.0.0.0/0.0.0.0
> acl manager proto cache_object
> acl localhost src 127.0.0.1/255.255.255.255
> acl client src 192.168.1.0/255.255.255.0
> acl client src 10.0.0.0/255.255.255.0
> acl to_localhost dst 127.0.0.0/8
> acl PURGE method PURGE
> acl SSL_ports port 443 563
> acl Safe_ports port 80 # http
> acl Safe_ports port 21 # ftp
> acl Safe_ports port 443 563 # https, snews
> acl Safe_ports port 70 # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535 # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl CONNECT method CONNECT
> http_access allow manager all
> #http_access deny manager
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> http_access allow PURGE localhost
> http_access deny PURGE
> http_access allow localhost
> http_access allow client
> http_access deny all
> http_reply_access allow all
> icp_access allow client
Very weird.
Your cache_peer sibling is at 10.0.0.2, yet ICP access is granted through
the broad 'client' ACL (covering all of 192.168.1.0/24 and 10.0.0.0/24)
rather than an ACL matching just the sibling that actually needs to check
whether your proxy has an object cached.
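You already define an ACL for the sibling further down; moving it up and
using it here would tighten this (a sketch reusing your own acl name):

  acl my_other_proxy src 10.0.0.2
  icp_access allow my_other_proxy
  icp_access deny all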
>
> acl my_other_proxy src 10.0.0.2
> follow_x_forwarded_for allow localhost
> follow_x_forwarded_for allow my_other_proxy
>
> #miss_access allow all
> cache_mgr mirza.k_at_gmail.com
> cache_effective_user proxy
> cache_effective_group proxy
> visible_hostname private.server
This is the _public_ name for your Squid.
Check that unique_hostname is at least set to a valid FQDN that can be
used for loop detection.
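Something along these lines (example names; substitute your own domain):

  visible_hostname proxy.example.com
  unique_hostname squid1.example.com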
> logfile_rotate 1
> forwarded_for on
> buffered_logs on
> client_db off
> strip_query_terms off
> coredump_dir /var/spool/squid
> #tcp_outgoing_tos 0x30 localnet
> zph_mode tos
> zph_local 0x30
> zph_parent 0
> zph_option 136
>
> refresh_pattern ^ftp: 1440 20% 10080
This refresh_pattern is _required_ to handle dynamic content safely when
caching it:
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
> refresh_pattern . 0 40% 40320
>
>
> fqdncache_size 4096
>
> #my script
> refresh_pattern -i \.flv$ 10080 90% 999999 ignore-no-cache override-expire ignore-private
The above line will never match. It MUST be placed above the default
refresh_pattern values, because Squid uses the first pattern that matches
a URL.
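Reordered, the whole list would look something like this (one possible
ordering, combining your rules with the dynamic-content rule above):

  refresh_pattern -i \.flv$ 10080 90% 999999 ignore-no-cache override-expire ignore-private
  refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
  refresh_pattern ^ftp: 1440 20% 10080
  refresh_pattern . 0 40% 40320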
>
> acl youtube dstdomain .youtube.com
> acl googlevideo dstdomain video.google.com
> cache allow youtube
> cache allow googlevideo
Amos