Hi folks.
Apologies in advance if this is already covered in the docs or FAQ, I
wasn't able to find an answer.
I'm trying to use squid as a reverse proxy to cache honkingly large
files, in the 200-300GB range.
I've been able to get it to work for files up to around 125 GB, but I'm
seeing failures above that level. Is this a known limitation? If not, is
there a reference somewhere I can look at to figure out what I'm doing
wrong?
I've been testing on a quad-core Mac Pro running Leopard. I built
squid from 3.0.STABLE4 source (with gcc 4.0.1), and configured with the
following flags:
-----
configure --with-build-environment=POSIX_V6_LPBIG_OFFBIG \
  --with-pthreads --with-default-user=squid --with-large-files \
  --enable-async-io=32 --prefix=/usr/local/squid
------
I've been testing with a squid.conf as follows:
------
http_port 8000 defaultsite=127.0.0.1
access_log /opt/local/var/squid/logs/access.log squid
acl apache rep_header Server ^Apache
acl port8000 port 80
http_access allow port8000
always_direct allow localhost
coredump_dir /opt/local/var/squid/cache
cache_dir ufs /opt/local/var/squid/cache/ 250000 20 256
maximum_object_size 250000000000
-------
Anyone have any suggestions?
Thanks,
Mike
Received on Mon Apr 07 2008 - 13:22:50 MDT
This archive was generated by hypermail 2.2.0 : Thu May 01 2008 - 12:00:04 MDT