On 02/13/2014 05:37 PM, Rajiv Desai wrote:
> On Thu, Feb 13, 2014 at 3:02 PM, Alex Rousskov wrote:
>> On 02/13/2014 03:01 PM, Rajiv Desai wrote:
> I increased the slot size for a fresh cache with:
> cache_dir rock /mnt/squid-cache 204800 max-size=4194304 slot-size=32768
>
> How do I confirm that the slot size I have configured is being used?
I believe cache manager mgr:storedir output contains enough information
to compute the slot size being used.
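As a back-of-the-envelope cross-check, the expected slot counts can be computed directly from the cache_dir line above and compared against what mgr:storedir reports (a sketch; the exact mgr:storedir field names vary by Squid version):

```python
# Expected rock slot accounting, assuming the cache_dir line above:
# cache_dir rock /mnt/squid-cache 204800 max-size=4194304 slot-size=32768
cache_dir_mb = 204800      # store size in MB
slot_size = 32768          # bytes, from slot-size=32768
max_object = 4194304       # bytes, from max-size=4194304

store_bytes = cache_dir_mb * 1024 * 1024
total_slots = store_bytes // slot_size            # slots the store should have
slots_per_max_object = -(-max_object // slot_size)  # ceiling division

print(f"total slots:          {total_slots}")
print(f"slots per 4MB object: {slots_per_max_object}")
```

If mgr:storedir reports a total slot (or "entry limit") figure close to `total_slots`, the configured slot size took effect; a figure off by a power of two usually means the store was built with a different slot-size.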
>>>>> cache_mem 1024 MB
>>>>>
>>>>> with 4 workers:
>>>>> workers 4
>>>>>
>>>>> I am running Squid on a VM with 8 vCPUs (reserved CPU) and 8 GB RAM
>>>>> (reserved). It does not seem to be bottlenecked by cpu or memory
>>>>> looking at vmstat output.
>>>>> I get a throughput of ~38MB/sec when all objects are read from cache
>>>>> (with 64 outstanding parallel HTTPS reads at all times and avg object
>>>>> size of 80 KB).
>>>>
>>>> The first question you have to ask is: What is the bottleneck in your
>>>> environment? It sounds like you assume that it is the disk. Do you see
>>>> disk often utilized above 90%? If yes, you are probably right. See the
>>>> URL below for suggestions on how to measure and control disk utilization.
>>>
>>> The utilization is < 50%, but I am unsure whether that is because of async I/O.
>> If the utilization is always so low, the bottleneck may be elsewhere.
> When I perform a random read benchmarking test, iostat still shows
> idle percentage > 50%.
I would expect a reading close to 100% if the benchmarking test is meant
to determine true peak disk throughput [with random reads]. If you are
not getting close to 100% utilization, then the benchmarking test is
flawed, something else between the test and the disk subsystem
interferes, iostat misleads, and/or something else went wrong.
> Perhaps you are referring to a different utilization metric that I
> should be looking at?
I am referring to disk subsystem utilization: The percentage of the time
the disk subsystem was busy reading or writing (including seek,
rotational latency, and all other disk overheads) during a given period
of time. IIRC, iostat -x reported reasonable values for that in my tests,
but YMMV.
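For concreteness, the metric in question is the %util column of `iostat -x`. A small sketch of extracting it programmatically (the sample output below is illustrative, not captured; column layout varies across sysstat versions, so this assumes data rows align with a header line containing "%util"):

```python
# Minimal sketch: pull the %util column out of `iostat -x` output.
# Assumption: the header row names a "%util" column and device rows
# align with it; real layouts differ across sysstat versions.
def parse_util(iostat_output):
    """Return {device: %util} for each device row after a %util header."""
    utils = {}
    offset = None
    for line in iostat_output.splitlines():
        fields = line.split()
        if not fields:
            continue
        if "%util" in fields:
            # Remember the column position as an offset from the end.
            offset = fields.index("%util") - len(fields)
            continue
        if offset is not None:
            try:
                utils[fields[0]] = float(fields[offset])
            except (ValueError, IndexError):
                pass  # skip non-device rows
    return utils

sample = """Device:  r/s   w/s  rkB/s  wkB/s  await  svctm  %util
sda     120.0  30.0 3840.0  960.0   4.20   0.90  48.50
"""
print(parse_util(sample))  # -> {'sda': 48.5}
```

Sampling this every few seconds (e.g. `iostat -x 5`) rather than over long intervals is what catches the short utilization peaks discussed below.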
>> If you do have disk writes in your workload, please make sure you do not
>> look at average disk utilization over a long period of time (30 seconds
>> or more?). With writes, you have to avoid utilization peaks because they
>> block all processes, including Squid. If/when that happens, you can
>> actually see Squid workers in D state using top or similar. The
>> RockStore wiki page has more information about this complex stuff.
> Disk writes occur on cache misses. When there is a high number of misses,
> the WAN bandwidth becomes the bottleneck, with ~200 Mbps available
> bandwidth, so I am not too concerned about that.
I am not sure you can rely on the WAN bottleneck to nicely spread out your
disk writes. Squid will accumulate write data before sending it to disk.
That accumulation can create bursts of write requests that may
debilitate untuned Squid. Moreover, even a relatively low percentage of
writes may have significant effect on low-level disk caching and seek
optimizations (outside of Squid).
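For reference, the RockStore wiki page Alex mentions describes rock-specific cache_dir options for capping write bursts. A hedged sketch building on the cache_dir line from earlier in the thread (option availability and defaults depend on your Squid version, so verify against your release's documentation; the numeric values here are placeholders, not recommendations):

```
# Illustrative only: rock cache_dir write-smoothing options.
# swap-timeout  - give up queuing a swap-out after this many milliseconds
# max-swap-rate - cap disk swap operations per second
cache_dir rock /mnt/squid-cache 204800 max-size=4194304 slot-size=32768 \
    swap-timeout=300 max-swap-rate=200
```

Capping the swap rate trades some hit ratio (objects that miss the write budget are not cached) for predictable disk latency, which is usually the right trade when write bursts put workers into the D state described above.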
There is nothing wrong with focusing on reads first, especially if you
want to understand how things really work. However, ignoring writes
throughout the entire optimization process is rather dangerous IMHO.
Also, it sounds like you have a rather specialized use case (unusual
response sizes; a forward proxy with a well-controlled response dataset;
unstable hit ratio, etc.). It is possible that you need specialized
tuning to optimize that kind of environment. Most of my comments here
and on the wiki should be general enough to apply, but you may need more
than that general advice to get this working well.
Cheers,
Alex.
Received on Fri Feb 14 2014 - 01:25:18 MST
This archive was generated by hypermail 2.2.0 : Sat Feb 15 2014 - 12:00:05 MST