Solved

Squid forward cache proxy very slow on some PDF

Posted on 2013-01-25
971 Views
Last Modified: 2013-02-04
Hi

We have a problem where some PDFs that go through our Squid cache proxy take a very long time to load: more than 2 minutes for ~110 KB. As soon as I go direct (bypassing the proxy), it works perfectly.

Not all PDFs take that long.
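
A quick way to reproduce the comparison from the command line (a sketch; 160.85.104.11:8080 is the http_port from the config below, and the client has to be permitted by the access ACLs):

# fetch once through the proxy and once directly, printing only the total time
curl -o /dev/null -s -w 'via proxy: %{time_total}s\n' -x http://160.85.104.11:8080 'http://www2.zhlex.zh.ch/appl/zhlex_r.nsf/0/9429732E0BEDB5EDC12574C60044A4CC/$file/xxxx.pdf'
curl -o /dev/null -s -w 'direct:    %{time_total}s\n' 'http://www2.zhlex.zh.ch/appl/zhlex_r.nsf/0/9429732E0BEDB5EDC12574C60044A4CC/$file/xxxx.pdf'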

I tcpdumped the process and saw that there is a 2-minute gap when the request arrives at the proxy (tcpdump screenshot attached).

When I checked the proxy access log I could see that it took 120114 milliseconds for Squid to fetch and cache it:


1359106030.833 120114 160.85.85.46 TCP_MISS/200 116194 GET http://www2.zhlex.zh.ch/appl/zhlex_r.nsf/0/9429732E0BEDB5EDC12574C60044A4CC/$file/xxxx.pdf - DIRECT/195.65.218.66 application/pdf
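
For reference, these are the fields of Squid's default (native) access.log format, so the second column is the elapsed time of the request in milliseconds:

1359106030.833              timestamp (UNIX time, with milliseconds)
120114                      elapsed time of the request in milliseconds
160.85.85.46                client address
TCP_MISS/200                Squid result code / HTTP status
116194                      reply size in bytes
GET <url>                   request method and URL
-                           user name (none)
DIRECT/195.65.218.66        hierarchy code / server the object was fetched from
application/pdf             content type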



Why does Squid take that long? It is an awkward URL with some variables in it. Could this be the reason?

We are running Squid 3.1, but the problem also exists on 3.2.

The config is below. It has been ported over from older Squid setups and has not been adjusted since.

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl bigip src xx.xx.xx.xx/32
acl to_localhost dst 127.0.0.0/8
acl monhost   src xx.xx.xx.xx/32
acl srv-ts-057   src xx.xx.xx.xx/32
acl srv-ts-058   src xx.xx.xx.xx/32
acl snmppublic snmp_community Fast3thernet
acl xxnet src xx.xx.xx.xx/16       # xx
acl xxnet src xx.xx.xx.xx/32   # HSWNAT
acl xxnet src xx.xx.xx.xx/16           # VoIP
acl xxnet src xx.xx.xx.xx/22       # HAP
acl xxnet src xx.xx.xx.xx/22      # HSSAZ
acl xxnet src xx.xx.xx.xx/24       # Management Netz 1
acl xxnet src xx.xx.xx.xx/24       # Management Netz 2
acl xxnet src xx.xx.xx.xx/24      # FET-DEV
acl xxnet src xx.xx.xx.xx/24      # FET-TEST
acl xxnet src xx.xx.xx.xx/24      # BET-DEV
acl xxnet src xx.xx.xx.xx/24      # BET-TEST
acl xxnet src xx.xx.xx.xx/24      # FET-VDP
acl xxnet src xx.xx.xx.xx/24      # FET-VDP
acl STAFFMGR src xx.xx.xx.xx/26
acl SSL_ports port 443 8443 28443 50001
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT
acl MONxxCH dstdomain mon.xx.ch
acl ZREG dstdomain zreg.xx.ch
acl PUT method PUT
http_access allow PUT xxnet
http_access deny PUT
acl PURGE method PURGE
http_access allow PURGE localhost
http_access deny PURGE
acl PROPFIND method PROPFIND
http_access allow PROPFIND srv-ts-057
http_access allow PROPFIND srv-ts-058
http_access deny PROPFIND
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access deny !STAFFMGR MONxxCH
http_access deny !STAFFMGR ZREG
http_access deny SCHEISSMS
http_access allow xxnet
http_access deny all
icp_access deny all
follow_x_forwarded_for allow localhost
follow_x_forwarded_for allow bigip
acl_uses_indirect_client on
delay_pool_uses_indirect_client on
log_uses_indirect_client on
http_port 160.85.104.11:8080
hierarchy_stoplist cgi-bin ?
cache_mem 768 MB
maximum_object_size_in_memory 32 KB
cache_dir ufs /var/cache/squid 25000 64 256
coredump_dir /var/cache/squid
#access_log /var/log/squid/access.log
#cache_log /var/log/squid/cache.log
cache_store_log none
#pid_filename /var/run/squid.pid
ftp_user wwwuser@xx.ch
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9]
acl apache rep_header Server ^Apache
request_timeout 30 seconds
cache_mgr servicedesk@xx.ch
#mail_from squid@srv-app-901.xx.ch
#mail_program /usr/local/bin/mutt
cache_effective_user squid
cache_effective_group squid
httpd_suppress_version_string on
visible_hostname srv-app-901.xx.ch
unique_hostname srv-app-901.xx.ch
snmp_port 3401
snmp_access allow snmppublic monhost
snmp_access deny all
snmp_incoming_address xx.xx.xx.xx
snmp_outgoing_address 255.255.255.255
icp_port 0
allow_underscore off
dns_retransmit_interval 3 seconds
dns_timeout 1 minute
dns_nameservers xx.xx.xx.xx
append_domain .xx.ch
max_filedescriptors 8192


Question by:Chris Sandrini
6 Comments
 
LVL 57

Expert Comment

by:giltjr
ID: 38820942
I am assuming that 160.85.104.70 is the squid box.

You need to look at the "back" side of the Squid box to see how long it takes Squid to get the document from the origin server.

Although the log shows it took 120 seconds, you don't know how much of that time was spent waiting on the back-end server and how much was spent inside Squid.
 
LVL 11

Author Comment

by:Chris Sandrini
ID: 38825982
Hi

How can I check how long it takes Squid to grab that file?
 
LVL 57

Expert Comment

by:giltjr
ID: 38826334
You can run a packet capture on the Squid box, capturing traffic between Squid and the server the PDF file originates from.
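
For example, something along these lines (a sketch: eth0 is an assumed interface name, and 195.65.218.66 is the origin address taken from the DIRECT/... field of the access.log line above):

# capture full packets between the Squid box and the origin web server
tcpdump -i eth0 -nn -s 0 -w squid-origin.pcap host 195.65.218.66 and port 80

Reading the capture back (tcpdump -r or Wireshark) and comparing the timestamps of the outgoing request and the last packet of the response tells you how much of the 120 seconds is spent on the origin side.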
 
LVL 11

Author Comment

by:Chris Sandrini
ID: 38834048
"I am assuming that 160.85.104.70 is the squid box."

No, it is actually an F5 appliance that is load balancing the requests across three Squid boxes.

I have captured the time it takes Squid to fetch the PDF from the origin server, and it is less than a second.

I have also captured further network traffic on the Squid box itself (capture screenshot attached). There is the 2-minute gap again: 160.85.104.13 is the Squid box and the other IP is a gateway proxy.

This shows that Squid takes 2 minutes to handle the request before passing it back to the gateway, which then passes it back to my client.
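
As an additional cross-check from the Squid box itself, a single curl call can break the fetch time into DNS lookup, connect and total time (a sketch using standard curl -w variables):

# time the fetch of the PDF directly from the origin, bypassing Squid
curl -o /dev/null -s -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  total: %{time_total}s\n' \
  'http://www2.zhlex.zh.ch/appl/zhlex_r.nsf/0/9429732E0BEDB5EDC12574C60044A4CC/$file/xxxx.pdf'

If the total is under a second here, the extra time is being added on the proxy side rather than by the origin server.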
 
LVL 11

Accepted Solution

by:Chris Sandrini (earned 0 total points)
ID: 38835048
Problem solved!

I found out that the command "host www2.zhlex.zh.ch" ends in a timeout. Squid first looks up an AAAA record (IPv6), but we are not using IPv6; that lookup is what takes the 2 minutes before it times out and Squid falls back to the A record.

I have disabled IPv6 on the system, and I have added the following lines to squid.conf to force IPv4:

acl to_ipv6 dst ipv6
tcp_outgoing_address <your_proxy_ipv4_address> !to_ipv6

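
For anyone hitting the same symptom, the two lookups can be compared directly on the proxy to confirm it (assuming dig is available; the hostname is the one from the question):

# the AAAA query hangs until it times out, while the A query answers immediately
dig www2.zhlex.zh.ch AAAA
dig www2.zhlex.zh.ch A

Squid also has a dns_v4_first directive that makes it prefer IPv4 (A) results; depending on the exact 3.x release it may or may not avoid waiting on the AAAA lookup, so test it before relying on it as an alternative to the ACL plus tcp_outgoing_address workaround above.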


Now everything works!
 
LVL 11

Author Closing Comment

by:Chris Sandrini
ID: 38850417
Found the solution myself