Re: [squid-users] Re: squid 3.2.0.14 with TPROXY => commBind: Cannot bind socket FD 773 to xxx.xxx.xxx.xx: (98) Address

From: Eliezer Croitoru <eliezer_at_ngtech.co.il>
Date: Mon, 09 Sep 2013 23:19:30 +0300

Hey Nikolai,

I will try to make sense of what you have seen.
TPROXY is a very complex feature, and the kernel cannot bind the same
src(ip:port) + dst(ip:port) pair twice.
Say, for example, the client 10.100.1.100 tries to connect to 2.3.4.5
on port 80.
The client first tries:
10.100.1.100:5455 to 2.3.4.5:80
Then, say the client doesn't have the right route or there is some
network problem, so it tries again from:
10.100.1.100:5456 to 2.3.4.5:80
This client has a network issue, and the proxy knows it: the proxy is
transparent and has to re-intercept the same request twice. Once the
first connection has timed out at the kernel level, the application
can drop that connection and stop parsing the request.
The kernel can bind the client's src ip:port towards the destination
only if it knows that all port-80 traffic is routed through the proxy.
If that is not the case the client will have trouble, and binding
ip:port to ip:port at the network layer will be a disaster for a
couple of layers.
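
To make the error itself concrete (this is not Squid's comm code, just
a tiny standalone sketch; the addresses are made up, and you need
root/CAP_NET_ADMIN for IP_TRANSPARENT to bind a foreign IP at all),
this is what happens when two sockets try to take the same spoofed
ip:port:

/* Sketch: two sockets binding the same spoofed client ip:port.
 * The second bind() fails with EADDRINUSE (errno 98) - the error
 * behind Squid's commBind message. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef IP_TRANSPARENT
#define IP_TRANSPARENT 19
#endif
#ifndef SOL_IP
#define SOL_IP IPPROTO_IP
#endif

static int spoofed_socket(const char *ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int on = 1;
    struct sockaddr_in src;

    /* IP_TRANSPARENT allows binding to an address that is not local. */
    setsockopt(fd, SOL_IP, IP_TRANSPARENT, &on, sizeof(on));

    memset(&src, 0, sizeof(src));
    src.sin_family = AF_INET;
    src.sin_port = htons(port);
    inet_pton(AF_INET, ip, &src.sin_addr);

    if (bind(fd, (struct sockaddr *)&src, sizeof(src)) < 0)
        perror("bind");  /* second call prints: Address already in use */
    return fd;
}

int main(void)
{
    int a = spoofed_socket("10.100.1.100", 5455); /* first intercept: OK   */
    int b = spoofed_socket("10.100.1.100", 5455); /* same pair: EADDRINUSE */

    close(a);
    close(b);
    return 0;
}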

So the kernel manages what the bind will look like.
I don't see how a tproxy-enabled system serving more than 10,000
clients can reach a critical level of commBind errors unless the CPU
and all the lower levels of the kernel simply cannot handle this level
of traffic.

If it's the port-range limit from the kernel, it can be reproduced in
a matter of seconds by lowering it.
This limit is not a rule for the application; it only limits which
local ip:port the kernel will bind to when the source machine is the
local machine. It doesn't force the kernel to handle a lower number of
connections, but it lets the kernel do fewer lookups when trying to
find a free ip:port to bind for a new connection.
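
If you want to check what range you are working with (I am assuming
the standard net.ipv4.ip_local_port_range sysctl here), something as
small as this prints it and how many ports the kernel has to choose
from per local IP:

/* Sketch: print the local port range the kernel auto-selects from.
 * Assumes the standard Linux sysctl path. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/net/ipv4/ip_local_port_range", "r");
    int lo, hi;

    if (!f) {
        perror("ip_local_port_range");
        return 1;
    }
    if (fscanf(f, "%d %d", &lo, &hi) != 2) {
        fprintf(stderr, "unexpected format\n");
        fclose(f);
        return 1;
    }
    fclose(f);
    printf("local port range: %d-%d (%d ports per local IP)\n",
           lo, hi, hi - lo + 1);
    return 0;
}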

It seems to me like you are using connection tracking on a tproxy
system that doesn't need to do connection tracking at all at this kind
of scale. There is no reason for a tproxy system to keep track of a
client's connections for more than 5-10 minutes tops.

Try to look more into the connection tracking side rather than the
basic kernel settings.
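
A quick way to see whether the conntrack table itself is under
pressure (the /proc paths below are my assumption for a recent kernel
with nf_conntrack loaded; older setups name them differently):

/* Sketch: compare the current conntrack entry count against the table
 * limit. Paths assume a recent kernel with nf_conntrack loaded. */
#include <stdio.h>

static long read_long(const char *path)
{
    FILE *f = fopen(path, "r");
    long v = -1;

    if (f) {
        if (fscanf(f, "%ld", &v) != 1)
            v = -1;
        fclose(f);
    }
    return v;
}

int main(void)
{
    long count = read_long("/proc/sys/net/netfilter/nf_conntrack_count");
    long max   = read_long("/proc/sys/net/netfilter/nf_conntrack_max");

    printf("conntrack entries: %ld of %ld\n", count, max);
    return 0;
}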

I would say the commBind errors mean there are more connections than a
single-CPU/core process can handle.

Can you try the old way of load balancing with iptables, spreading the
load towards more than one process?

Eliezer

On 09/09/2013 02:20 PM, Nikolai Gorchilov wrote:
> On Mon, Sep 9, 2013 at 4:41 PM, Antony Stone
> <Antony.Stone_at_squid.open.source.it> wrote:
>> On Monday 09 September 2013 at 13:08:00, Nikolai Gorchilov wrote:
>>
>>> On Mon, Sep 9, 2013 at 4:15 PM, Nikolai Gorchilov <niki_at_x3me.net> wrote:
>>>> User's original port seems to be an easy option in TPROXY mode
>>>
>>> I did a simple test and found the kernel will emit EADDRINUSE when you
>>> bind on the user's ip:port... So, a more complicated solution is needed:
>>> keeping track of all the used ports per IP (both the users' and those
>>> already auto-selected by the software) and auto-selecting from the
>>> remaining...
>>>
>>> :(
>>
>> Or perhaps attempt binding to randomly selected IP:port combinations until you
>> don't get EADDRINUSE back?
>
> Yeah, a little bit dirty, but working solution.
>
> Just realised that keeping track of IP:port pairs in use at
> application level is useless, as there could be other software (or
> workers) running on the same machine and there's no practical way for
> them all to share this information. Seems the best place to keep
> track of all ip:port utilisation is the kernel - the only piece of
> software that knows everything :(
>
> Niki
>
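
For what it's worth, a rough sketch of that bind-and-retry idea
(purely illustrative, not Squid's comm code: the client ip:port is
made up, and instead of picking random ports the fallback simply binds
port 0 so the kernel - the one piece that knows everything - chooses a
free one):

/* Sketch: keep the client's IP for spoofing, but if the client's
 * exact port is already taken (EADDRINUSE), fall back to port 0 and
 * let the kernel pick a free one. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef IP_TRANSPARENT
#define IP_TRANSPARENT 19
#endif
#ifndef SOL_IP
#define SOL_IP IPPROTO_IP
#endif

static int bind_spoofed(const char *client_ip, unsigned short client_port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int on = 1;
    struct sockaddr_in src;

    if (fd < 0)
        return -1;
    setsockopt(fd, SOL_IP, IP_TRANSPARENT, &on, sizeof(on));

    memset(&src, 0, sizeof(src));
    src.sin_family = AF_INET;
    inet_pton(AF_INET, client_ip, &src.sin_addr);

    src.sin_port = htons(client_port);    /* first try: client's own port */
    if (bind(fd, (struct sockaddr *)&src, sizeof(src)) == 0)
        return fd;

    if (errno == EADDRINUSE) {            /* taken: let the kernel choose */
        src.sin_port = 0;
        if (bind(fd, (struct sockaddr *)&src, sizeof(src)) == 0)
            return fd;
    }
    close(fd);
    return -1;
}

int main(void)
{
    int fd = bind_spoofed("10.100.1.100", 5455); /* made-up client ip:port */

    if (fd < 0)
        perror("bind_spoofed");
    else
        close(fd);
    return 0;
}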

Received on Mon Sep 09 2013 - 20:19:44 MDT

This archive was generated by hypermail 2.2.0 : Sat Sep 14 2013 - 12:00:04 MDT