  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 186

Speed Up Socket Requests?

I have been playing with sockets and have found that
making the socket requests takes much more time than actually getting the data. On average it takes about 3-6 seconds
to request 12 web pages but less than 1 second to actually receive them all. What am I doing wrong?
(Our instructor won't let us use LWP, so please don't waste
my time and yours by mentioning it. Thanks!)
Below is the request socket routine:



#!\perl\bin\perl.exe -w
# Shebang line for Windows (Win98 and ActiveState Perl)

use IO::Socket;
use IO::Select;

my @servers = qw(www.yahoo.com www.palm.com www.compaq.com www.google.com
                 www.microsoft.com www.tacobell.com www.ibm.com www.hp.com
                 www.dell.com www.sun.com www.gateway.com www.cnn.com
                 www.walmart.com);

my $port    = 80;
my $TimeOut = 10;
my $len     = 0;

my $crlf = "\015\012";
$EOL   = "\015\012";
$BLANK = $EOL x 2;

$sel = IO::Select->new;

# ===================================== Send Requests
$startsend = time;
foreach $server (@servers)
{
    $sock = IO::Socket::INET->new(
        PeerAddr => $server,
        Proto    => 'tcp',
        Type     => SOCK_STREAM,
        PeerPort => 'http(80)');

    if ($sock)
    {
        $sel->add($sock);
        $sock2host{$sock} = "$server";
        $sockrequest = "GET / HTTP/1.0" . $crlf .
                       "Host: $server:$port" . $crlf . $crlf;

        $len = length($sockrequest);
        syswrite($sock, $sockrequest, $len);
    }

    # $remote->autoflush(1);
    # print $remote "GET / HTTP/1.0".$BLANK;

    # print $sock $sockrequest;
    # $sock->flush();
}

# ======= Show Request Time
$endsend = time;
$timetosend = $endsend - $startsend;
print "Time To Send Requests = " . $timetosend . "\n";

Asked by: jgore

1 Solution
 
maneshr commented:
jgore,

"..What am I doing  wrong?..."

What is the benchmark that you are measuring this 3-6 second time period against? What is your expectation for this time period?

Do you have a number in mind? If yes, then can you please explain the basis for this number?

As I see it, 3-6 seconds is a very reasonable period for these high-traffic sites.

Besides, there is always the underlying network, which could be causing the delay.
Have you factored that into your measurement?

Please let me know.
 
jgore (Author) commented:
maneshr:

If I can download all the actual data (web pages), of many hundreds of K bytes, in less than a second, why should the
requests take so much longer when they are only 40 characters each?
The web page data is much, much larger than the little requests. I would assume they could be sent faster!
It seems counterintuitive to think otherwise.

I think it's blocking. Each request is waiting for
an acknowledgement or something before the next is sent. I would like to send them all at one time or at least rapid fire.

I really hate to use Fork because it's not very standard.
Some versions do it, some don't, and it's buggy.

There must be a way to make the request much faster.
So little bytes being sent, so little code being executed,
but it is taking so much time!

I know you're pretty knowledgeable, so I await any enlightenment you can give me...
 
maneshr commented:
jgore,

"..If I can download all the actual data (web pages), of many hundreds of K bytes, in less than a second .."

What is your basis of this statement? Are you referring to the way you get web pages from a browser?

Please clarify.

"... I would assume they could be sent faster!."

No matter how fast you send the request, the rendering of the page is only as fast as the time taken for the page to be served by the responding server and the network to deliver that page to your client.

Let me know if this is contrary to your understanding.

"..Each request is waiting for an acknowledgement or something before the next is sent..."

Is this based on some concrete deduction or just a swag based on the time delay?

".. really hate to use Fork because its not very standard. Some versions do it, some don't, and its buggy...."

Umm....I am not clear what you mean by versions. Are you referring to the fact that fork is supported on UN*X systems?

"..So little bytes being sent, so little code being executed,
                     but it is taking so much time!.."

Again, I think that, based on the sites you are calling and the times you are reporting, everything is OK and in line. But then, that is my personal opinion based on work I have done with Perl.

HTH.
 
jgore (Author) commented:
maneshr:

>"..If I can download all the actual data (web pages),
> of many hundreds of K bytes, in less than a second
> .."
>What is your basis of this statement? Are you referring to
>the way you get web pages from a browser?
>Please clarify.

I'm referring to the Perl routine I use after the one shown.
It gets all the web data in less than a second!
The above is just the Request portion of the program.
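
Roughly, that second part is just a select-and-read loop over the same $sel object. This isn't my exact code, only the shape of it:

# Rough shape of the receive loop (not the exact code):
my %pagedata;
while ($sel->count)                           # still have open sockets
{
    my @ready = $sel->can_read($TimeOut)
        or last;                              # nothing readable before the timeout
    foreach $sock (@ready)
    {
        my $buf;
        if (sysread($sock, $buf, 8192))
        {
            $pagedata{ $sock2host{$sock} } .= $buf;   # append data for that host
        }
        else
        {
            $sel->remove($sock);              # EOF: server finished sending
            close $sock;
        }
    }
}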


>"... I would assume they could be sent faster!."
>No matter how fast you send the request, the rendering
>of the page is only as fast as the time taken
>for the page to be served by the responding server and
>the network to deliver that page to your client.

The web page data comes across very fast; that is why I
think the requests should be faster.

>Let me know if this is contrary to your understanding.


1. Make new Sock (connect to server).
2. Request Web page from server.
3. Receive actual web page data from server.
4. Do something with data received.

It seems 1 or 2 above is taking 5 seconds to execute
(when requesting about 10 pages), whereas 3 and 4 are almost instant, about 1 second or less.
It would seem to me that making a sock and sending a
request should be much faster than receiving a megabyte
of data from different servers in 3.

Of course, it could be that 1 and 2 are taking so long
that by the time I get to 3 and 4 above there is a lot of
data waiting to be read...so 3 and 4 just seem to be very
fast. Either way, 1 and 2 are slow.


>"..Each request is waiting for an acknowledgement or
>something before the next is sent..."
>Is this based on some concrete deduction or just a swag
>based on the time delay?

Just a "Swag" as you say. If I can get the web page data very fast why can't I send the requests faster?


>.. really hate to use Fork because its not very standard.
>some versions do it, some don't, and its
>buggy...."
>Umm....i am not clear what you mean by versions. Are you >referring to the fact that fork is support
>on UN*X systems?

Yes. And even then it can have problems. I would rather
stay away from Fork.


>"..So little bytes being sent, so little code being
>executed,but it is taking so much time!.."
>Again, i think that based on the sites you are calling
>and the time you are reporting, that everything
>is ok and in line. But then that is my personal opinion
>based on work i have done with Perl.


You do understand that the above code just makes a
request for data to be sent. It doesn't actually download
any data from them (that is a second part of the routine
which I didn't post).
Just connecting to 10 servers and sending out 10 requests
takes 5 seconds. Seems wrong to me. I bet it's creating
a new sock (connecting to each server) that is taking so
long.

I will use a HiRes Timer in the morning and report my
results. I will time everything. That will show me right
or wrong.
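
Probably something like this, wrapping each stage with Time::HiRes (just the plan, not what I've actually run yet):

use Time::HiRes qw(gettimeofday tv_interval);

my $t0 = [gettimeofday];
# ... step 1: create the socks and connect ...
print "SockTime = ", tv_interval($t0), "\n";

$t0 = [gettimeofday];
# ... step 2: syswrite the GET requests ...
print "SendRequest = ", tv_interval($t0), "\n";

$t0 = [gettimeofday];
# ... step 3: receive the web page data ...
print "ReceiveTime = ", tv_interval($t0), "\n";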

Thanks for being patient with me ;-)
 
maneshr commented:
jgore,

"...The web page data comes across very fast, that is why think the Requests should be faster..."

Based on your persistence and confidence, I took the script and ran it on my server & saw the same 4-second response time.

I then disabled output buffering by adding the following line....

$|++;

...right AFTER the shebang line...

#!\perl\bin\perl.exe -w

...and.....well, add that line & see for yourself..............

Luck!!
 
jgore (Author) commented:
The  $|++;   or $|=1;  doesn't seem to help me.
But I'll leave it in, just in case it helps others.

I'm still getting poor Create Sock times. It's not the
request routine as I first thought. It's the part of the
routine that creates a new sock and connects to the remote
server. Receiving the data is pretty fast considering how
much data is being transferred.

SockTime = 3.45999991893768
SendRequest = 0
ReceiveTime = 1.71000003814697

SockTime = 3.89999997615814
SendRequest = 0.0500000715255737
ReceiveTime = 0.819999933242798

SockTime = 2.51999986171722
SendRequest = 0
ReceiveTime = 1.49000000953674

SockTime = Time it takes to create a new sock and connect to remote server.
SendRequest = Time it takes to send request to remote server.
ReceiveTime = Time it takes to actually receive all the web page data from remote servers.


Hmmm....perhaps I'll try some lower level stuff.
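
For instance, maybe I can start all the connects at once with non-blocking sockets and let IO::Select tell me when each one has finished connecting. Something like this in place of the Send Requests loop (just a sketch, untested; I don't even know whether ActiveState Perl on Win98 supports Blocking => 0):

# Sketch only: start every connect without waiting for it to complete.
my $connecting = IO::Select->new;
foreach $server (@servers)
{
    $sock = IO::Socket::INET->new(
        PeerAddr => $server,
        PeerPort => 'http(80)',
        Proto    => 'tcp',
        Blocking => 0);              # return at once, connect in the background
    next unless $sock;
    $connecting->add($sock);
    $sock2host{$sock} = "$server";
}

# A socket turns writable once its connect has completed (or failed;
# a real version should check $sock->sockopt(SO_ERROR) to tell them apart).
while ($connecting->count)
{
    my @ready = $connecting->can_write($TimeOut)
        or last;                     # give up on connects that never finish
    foreach $sock (@ready)
    {
        $server = $sock2host{$sock};
        $sockrequest = "GET / HTTP/1.0" . $crlf .
                       "Host: $server:$port" . $crlf . $crlf;
        syswrite($sock, $sockrequest, length($sockrequest));
        $connecting->remove($sock);
        $sel->add($sock);            # hand it over to the receive loop
    }
}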
 
jgore (Author) commented:

I'm giving you the points because I think that was the
only thing I hadn't added that I should have.
Are you using Unix? I was just wondering if that is why
it sped up your connect times...perhaps that
$|++; (or $|=1;) helps Unix more than Windows.

I looked everywhere for an answer to my connect delay
and basically found that the Windows version of Perl doesn't
support some things the Unix Perl does.

No matter, my script will be running on both Windows and
Unix platforms. And anything I can do to make it faster is
good!
Thanks for the help!




 
maneshr commented:
jgore,

"..Are you using Unix?..."

Yes. That is correct.

I am happy that the line of code I suggested works on UN*X systems, at the very least.
