> On Tue, 21 Apr 2009 12:54:53 +1200 (NZST), "Amos Jeffries"
> <squid3_at_treenet.co.nz> wrote:
> [cut]
>>> 2009/04/20 18:05:20| Store rebuilding is 822.48% complete
>
>> what type of file system is in use?
>
> ext3
>
>
>> with what settings?
>
> 4x 300GB SAS disks
>
> cache_swap_low 80
> cache_swap_high 85
>
> cache_dir aufs /var/cache/proxy/cache1 256000 256 256
> cache_dir aufs /var/cache/proxy/cache2 256000 256 256
> cache_dir aufs /var/cache/proxy/cache3 256000 256 256
> cache_dir aufs /var/cache/proxy/cache4 256000 256 256
Ah, part of the problem may be the above.
256000 megabytes does not always equate to the actual free disk space
(a 300 GB disk is only ~286,000 mebibytes raw, and the ext3 filesystem
on it is smaller still: 276 GiB per the df output below). One rule of
thumb is to leave 10% to 20% of the disk space free for system use
(directory tables and journals).
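As a rough sketch only (the exact figure is a judgment call; this one
assumes you want Squid to use roughly 80% of each 276 GiB filesystem),
that would mean something along the lines of:

  cache_dir aufs /var/cache/proxy/cache1 220000 256 256
  cache_dir aufs /var/cache/proxy/cache2 220000 256 256
  cache_dir aufs /var/cache/proxy/cache3 220000 256 256
  cache_dir aufs /var/cache/proxy/cache4 220000 256 256

That leaves headroom for Squid's L1/L2 directory structure, swap.state,
and the filesystem journal on top of the 220000 MB of object data.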
>
> maximum_object_size 256 MB
> minimum_object_size 1 KB
>
> cache_replacement_policy heap LFUDA
>
> Now (this is why it does not work anymore):
>
> /dev/sdb1 276G 262G 0 100% /var/cache/proxy/cache1
> /dev/sdc1 276G 262G 0 100% /var/cache/proxy/cache2
> /dev/sdd1 276G 262G 0 100% /var/cache/proxy/cache3
> /dev/sde1 276G 262G 0 100% /var/cache/proxy/cache4
>
> Before the crash there was 20 to 35% free disk space on each disk, and it
> stayed like that for 6 weeks (the cache had reached its limits and was not
> growing anymore, until the upgrade to STABLE14 crashed the box).
>
>
>> with what disk available?
>> on what operating system?
>
> BlueWhite64 (an unofficial 64-bit Slackware port).
>
Aha.
As a side issue: do you know who the maintainer is for Slackware? I'm
trying to get in touch with them all.
>
>> is it rebuilding after saying DIRTY or CLEAN cache?
>
> CLEAN after upgrade, DIRTY after crashes.
>
>
>> does deleting the swap.state file(s) when squid is stopped fix things?
>
> I will try.
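For reference, assuming the default swap.state location inside each of
the cache_dir paths quoted above, that would be (with Squid fully
stopped first):

  squid -k shutdown
  rm /var/cache/proxy/cache1/swap.state /var/cache/proxy/cache2/swap.state
  rm /var/cache/proxy/cache3/swap.state /var/cache/proxy/cache4/swap.state
  squid

Note that Squid then has to rebuild its index by scanning the cache_dir
directories (a DIRTY rebuild), which can take a long while on
cache_dirs of this size.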
>
> The strangest thing was the store rebuild reporting > 100%.
Yes, we have seen a similar thing long ago in testing. I'm trying to
remember and research what came of it. At present I'm thinking it may
have had something to do with a 32-bit/64-bit change between the distro
build and the build that wrote the cache: as I recall the rebuild
percentage is worked out against an object count estimated from the
swap.state file size, so if the on-disk entry size changed between
builds the estimate comes out low and the figure can run past 100%.
Amos