On Mon, 8 Feb 1999, Tony Demark wrote:
> I am currently using Squid 1.1 on Solaris 2.6 to accelerate a dynamic
> database-driven website and I am wondering about the following situations:
>
> (1) Squid "Appliance"
>
> Our current setup has an Ultra 1 with 4 virtual interfaces (with different IP
> addresses) - one for each independent web site. The web servers run on port 81
> while squid runs on port 80 in accelerator mode configured to handle 'virtual'
> hosts on port 81. This configuration forces squid to run on the same machine
> as the web servers. Is there a way to accelerate a set of websites with a
> Squid "appliance" machine sitting in front of N number of webservers with only
> one instance of squid on the "appliance"? There doesn't seem to be a way in
> 1.1 and I can't find any documentation to that effect for 2.1 or 2.2. Assuming
> the feature is not there, how hard would it be to add? (Maybe the addition of
> a virtual_host mapping where you can map incoming IP addresses to downstream
> servers:ports would be the ticket)
As long as the Host header ends up as part of the requested URL, you can
do it with a simple redirector. Indeed, you don't even need squid to be
listening on more than one IP address, since it can discriminate between
sites based on the domain.
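
Just to illustrate (this is a sketch, not tested, and the site names and
backend addresses below are made up): a redirector is simply a program that
reads request lines from squid on stdin and writes the possibly-rewritten URL
back on stdout, one per line. In Python, mapping each accelerated site to its
own backend server:port could look something like this:

    #!/usr/bin/env python
    # Hypothetical accelerator redirector: rewrite the public hostname in
    # each URL to the corresponding backend origin server.
    import sys
    from urllib.parse import urlsplit, urlunsplit

    # Assumed mapping of public site names to backend webservers on port 81.
    BACKENDS = {
        "www.site-a.example": "10.0.0.11:81",
        "www.site-b.example": "10.0.0.12:81",
    }

    for line in sys.stdin:
        # Squid hands the redirector lines of the form:
        #   URL client_ip/fqdn ident method
        fields = line.split()
        if not fields:
            print()
            sys.stdout.flush()
            continue
        url = fields[0]
        parts = urlsplit(url)
        backend = BACKENDS.get(parts.hostname, "")
        if backend:
            # Swap in the backend host:port; keep path and query untouched.
            url = urlunsplit((parts.scheme, backend, parts.path,
                              parts.query, parts.fragment))
        print(url)
        sys.stdout.flush()   # squid waits for one reply per request line

You point squid at such a program with the redirect_program directive; the
accelerator just has to be building URLs from the Host header, as above.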
> (2) Sibling relationship on accelerated servers
>
> Are there any problems with setting two squids that are in front of two
> identically configured (but different IP addressed) servers to treat each
> other as siblings?
>
> For example:
>
> www = 192.168.42.100, 192.168.42.200
> squid 2 has -> http://192.168.42.200:81/foo/bar.html
> squid 1 gets request -> http://192.168.42.100:81/foo/bar.html
>
> squid 1 asks squid 2 for object, squid 2 returns /foo/bar.html (even
> though the request came to a different IP address)
>
> Will this work or will the differing IP addresses prevent this from occurring?
To be honest, I'm not sure why you would want your accelerators to fetch
from each other in this fashion. If you don't have the object in the local
cache, then fetching it from another cache or from the backend should have
a similar cost (barring gross overload on the backend server).
That said, I don't see why it shouldn't work. Again, in the scenario you
describe I would use a redirector so that each frontend round-robins
requests across the backends in turn. :)
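
Purely as a sketch (untested; the addresses are taken from your example), the
backend choice inside such a redirector could round-robin rather than use a
fixed map:

    # Hypothetical round-robin backend selection for the redirector loop:
    import itertools

    backends = itertools.cycle(["192.168.42.100:81", "192.168.42.200:81"])

    def pick_backend():
        # Each call returns the next origin server in turn, so cache
        # misses are spread across both backends.
        return next(backends)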
> Thanks,
> - Tony
John