• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 1232
  • Last Modified:

Squid memory management

Dear EE,
I have installed Squid 2.6 STABLE12 on openSUSE 10.2 (64-bit), but I am facing a problem with memory utilization.
The server has 4 GB of RAM, and Squid is deployed on an HP ProLiant 380.
The memory is almost always full, and this leads to delays in my cache responses.

How can I improve the memory management?

Thank you in advance
Asked by: anas_aliraqi
1 Solution
 
Nopius commented:
> all the time i find my memory almost full, and this lead to delay in my cache response.

Why do you think the memory is almost full? What commands do you use, and what output do you see?

How long is the delay? Did you measure it, and are you sure the delays come from a slow cache?
 
anas_aliraqi (Author) commented:
Dear,
Thanks for the quick answer.
I used the following command: ./squid -sYD
I see my memory is full using the top command; for example, I find just 20 MB of RAM free out of 4 GB.
One more thing: I used an SNMP monitoring application and found the following:
- 20-40 MB are free.
- 1 GB is used by memory buffers.
- I do not know where the rest of the space is going (I installed just Squid and DNS on this server).

As for how I know it is delayed: when I view some pages at peak time, for example, they take a long time to load; also, when I use Yahoo Messenger, my messages take a long time to reach their destination, or the other person's messages queue up and then appear all at once in one burst.

Thank you in advance
 
Nopius commented:
> 20-40M are free.

That's OK for Linux: it uses all available memory for buffers and page cache. If you have no swap activity, memory is almost certainly not your problem (the top command also shows swap usage).
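To check this yourself, you can sum the reclaimable figures from /proc/meminfo — a minimal sketch, assuming the usual Linux /proc/meminfo field names:

```shell
#!/bin/sh
# Estimate how much memory the kernel could hand back to applications:
# MemFree plus the Buffers and Cached pools, which Linux reclaims on
# demand. A box with only 20M "free" but 1G+ in buffers/cache is healthy.
awk '/^(MemFree|Buffers|Cached):/ { sum += $2 }
     END { printf "reclaimable: %d kB\n", sum }' /proc/meminfo
```

If that sum is a large fraction of RAM and swap usage stays near zero, the box is not actually short of memory.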

> when for example view some pages in the peek it take long time, also when i use yahoo messenger

That may be a problem with the Yahoo server being heavily loaded and responding slowly. No cache is used when Squid detects dynamic content (and in the case of the messenger, it is dynamic). As you can guess, only cacheable pages are served from the cache. To see or analyze your cache activity, edit your squid.conf and uncomment "cache_log /usr/local/squid/logs/cache.log" (or whatever path you have), and add "debug_options ALL,1" for more detail. Also check that you have something like "access_log /usr/local/squid/logs/access.log squid".

A quick check of access.log shows whether Squid used the cache or a direct connection.
TCP_MISS means the cache was not used (either dynamic content, or the page was requested for the first time):
1186314466.206   6307 172.16.1.118 TCP_MISS/200 388 POST http://mail.google.com/mail/channel/bind? - DIRECT/66.249.91.19 text/html

If you see something with _HIT:
1186309463.287     44 172.16.1.118 TCP_IMS_HIT/304 357 GET http://css.yandex.net/css/optim.css - NONE/- text/css

that means the page was served from the cache. If you see too few such entries, your Squid is not as efficient as you might expect. There are also access.log analyzers, such as Squeezer (http://strony.wp.pl/wp/maciej_kozinski/squeezer.html): "by Maciej Kozinski; gathers information about Squid's internal performance and efficiency, finds bottlenecks, shows data transfer speed from particular sources." There is also Squeezer2 (http://www.rraz.net/squeezer2/), which is like Squeezer but with more features.
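To turn the log into a single number, you can count *_HIT entries against the total with awk — a rough sketch; the log path and the format (Squid's native format, with the result code in field 4) are assumptions:

```shell
#!/bin/sh
# Print the cache hit ratio of a Squid access.log in native log format,
# where field 4 is the result code (TCP_MISS/200, TCP_IMS_HIT/304, ...).
hit_ratio() {
    awk '{ total++; if ($4 ~ /_HIT/) hits++ }
         END { if (total) printf "%d/%d requests were hits (%.1f%%)\n",
                                 hits, total, 100 * hits / total }' "$1"
}
# Example (path is an assumption; use whatever your access_log points to):
# hit_ratio /usr/local/squid/logs/access.log
```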

 
Nopius commented:
Also, this link is a 'meta' link to all Squid analyzers: http://www.squid-cache.org/Scripts/
 
anas_aliraqi (Author) commented:
Thanks for your reply.

Actually, I know about these log files, the hit and miss ratios, and the analyzers you mention, but I am afraid the problem of queued messages in Yahoo Messenger is not related to the Yahoo server load, because I checked from a PC connected to a real public IP at the same time and it was doing fine.

I have raised the Linux file descriptor limit to increase the number of concurrent files and sessions. It improves my performance, but I still have the same problem.

Thank you for everything


 
Nopius commented:
> actually i know about these log files and hit and miss ratio also about the given analyzers

So what do you see in your logs when a user experiences delays? Please provide the related log entries.

> i have configure file descriptor of the Linux to increase the amount of concurrent files and sessions. it is tunning my performance but i still have the same problem.

Is your server heavily loaded (what is the load average)?
Is your Internet link the same as the link of the user for whom everything is OK?
If the links are different, how loaded is yours? If they are the same, how is it shared?
How many concurrent connections do you have (netstat -an | grep EST | wc -l)?
Please provide your swap usage statistics (swapon -s).

It would also be good if you saved network captures on the internal and external interfaces of the Squid machine, and on a client machine directly connected to the Internet, while accessing a problematic page. If your machine is not heavily loaded, you may have some sort of network problem (interface errors, an MTU mismatch, blocked ICMP, etc.)
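For the captures, something along these lines works; the interface names (eth0/eth1) and the client address are placeholders, so this sketch only prints the commands it would run:

```shell
#!/bin/sh
# Print tcpdump commands to capture one client's traffic on both of
# Squid's interfaces. eth0/eth1 and the client IP are placeholders.
CLIENT=172.16.1.118
for IF in eth0 eth1; do
    echo tcpdump -i "$IF" -s 0 -w "/tmp/${IF}.pcap" host "$CLIENT"
done
# Drop the 'echo' to actually capture (needs root), reproduce the
# problem, then compare the .pcap files from both sides.
```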

I should also say that I have never dealt with a 64-bit Squid on Linux (only on Solaris); it may have bugs I have never seen before.

On my test SuSE 10.1 (without swap) with squid absolutely idle I have:

# top -p `pgrep -d, squid`
Cpu(s):  0.1% us,  0.1% sy,  0.0% ni, 99.0% id,  0.8% wa,  0.0% hi,  0.0% si
Mem:    262144k total,    45084k used,   217060k free,        0k buffers
Swap:        0k total,        0k used,        0k free,        0k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
21875 root      15   0  5748  308  304 S    0  0.1   0:00.00 squid
21877 squid     16   0 10996 4264  960 S    0  1.6   0:23.72 squid


So in your case, I agree, memory usage may be an issue; everything depends on the system load.

 
Nopius commented:
http://wiki.squid-cache.org/SquidFaq/SquidMemory - the Squid memory optimization FAQ
 
anas_aliraqi (Author) commented:
Dear,
Thanks for your response.

I have traced the system; there is nothing strange in the log files (access.log, cache.log).

As for the Internet connection, it is good: I have a VSAT link with 12M download and 2M upload, and it is stable. When I ping any site on the Internet I get no request timeouts, but the average time is 600 ms.

The number of concurrent connections (netstat -an | grep EST | wc -l) is 133, which means it is normal and not heavily loaded.

The swap usage (swapon -s) is:

Filename           Type       Size     Used  Priority
/dev/cciss/c0d0p2  partition  8393952  104   -1

# when running top -p `pgrep -d, squid`:

top - 13:36:34 up 14 days, 21:13,  1 user,  load average: 0.78, 1.36, 1.68
Tasks:   2 total,   0 running,   2 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.1%sy,  0.0%ni, 79.1%id,  1.4%wa,  0.0%hi, 19.4%si,  0.0%st
Mem:   3988156k total,  3963156k used,    25000k free,  1353712k buffers
Swap:  8393952k total,      104k used,  8393848k free,   531084k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 3650 nobody    15   0  338m 317m 1536 S    1  8.2 555:54.92 squid
 3648 root      19   0 14880  696  380 S    0  0.0   0:00.00 squid

It would also be nice if you could tell me how to get network captures on the internal/external interfaces.
 
Nopius commented:
Sorry for the delay, I was on a business trip.

Your load figures are not good; your system is loaded.

When the load average goes above 1 (you have 1.68), your system starts to slow down. Squid consumes less than 400 MB of the 4 GB, so memory is not your problem.
You may increase the cache size.

Probably most of the time is spent by the disk drivers on cache reads/writes from/to disk. It may also be the network or some other drivers (you have roughly 20% system/softirq time, so much of the time your system is in driver code).

Check the recommendations here one by one (I mean how your Squid is compiled):
http://wiki.squid-cache.org/SquidFaq/SquidMemory#head-45526678eacf95bd32eea32669fe7e9d2e1e2498

You may try to _increase_ the cache_mem parameter and set it to 2 GB.
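In squid.conf that would look something like this (the value is illustrative):

```
# cache_mem limits the memory used for in-transit, hot and
# negative-cached objects -- it is NOT a cap on total process size.
cache_mem 2048 MB
```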

"As a rule of thumb on Squid uses approximately 10 MB of RAM per GB of the total of all cache_dirs (more on 64 bit servers such as Alpha), plus your cache_mem setting and about an additional 10-20MB. It is recommended to have at least twice this amount of physical RAM available on your Squid server."

Check how large your cache_dirs are: http://wiki.squid-cache.org/SquidFaq/ConfiguringSquid#head-a2d396c0ef66603362ae3790cf89752c8dcf463b

With 2 GB of memory on a 64-bit platform you can serve about 100 GB of cache_dirs; do you have that much cache_dir space? That does not mean, however, that your Squid will be fast with that amount of data. It depends on your disk type (IDE/FC/SCSI/SATA and the RAID level), and disk speed is probably your real bottleneck.
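Plugging that rule of thumb into numbers — a sketch assuming 100 GB of cache_dirs and a 2 GB cache_mem:

```shell
#!/bin/sh
# Squid FAQ rule of thumb: ~10 MB of RAM per GB of cache_dir, plus
# cache_mem, plus ~10-20 MB overhead; have about twice that physically.
CACHE_DIR_GB=100     # total size of all cache_dirs (assumed)
CACHE_MEM_MB=2048    # cache_mem setting (assumed)
NEED=$(( CACHE_DIR_GB * 10 + CACHE_MEM_MB + 20 ))
echo "expect ~${NEED} MB used; provision ~$(( NEED * 2 )) MB of RAM"
```

With these assumed values that comes to roughly 3 GB used, so a 4 GB box is on the tight side of the "twice this amount" recommendation.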

 
anas_aliraqi (Author) commented:
Dear Nopius,
Thanks. Now my system reads this:

Tasks:   2 total,   0 running,   2 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.2%us,  0.4%sy,  0.0%ni, 98.9%id,  0.2%wa,  0.1%hi,  0.2%si,  0.0%st
Mem:   3988156k total,  3961928k used,    26228k free,    78812k buffers
Swap:  8393952k total,      104k used,  8393848k free,   317008k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 3724 nobody    15   0 2874m 2.8g 1508 S    2 73.2 236:42.63 squid
 3721 root      18   0 14880  696  380 S    0  0.0   0:00.00 squid
