  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 527

Run a batch script to visit a website every hour

just one URL
once an hour

Windows Server 2008
Asked by: rgb192
1 Solution
 
leakim971 (Pluritechnician) commented:
rgb192,

You may use wget for Windows: http://gnuwin32.sourceforge.net/packages/wget.htm

Regards.
 
Ray Paseur commented:
Please explain a little more - what URL?  Is it on your server or another?  Which server has Windows?
 
biztiger commented:
Use curl to do this.

Download curl:
http://www.gknw.net/mirror/curl/win32/curl-7.19.7-ssl-sspi-zlib-static-bin-w32.zip

Now, create a batch file:

C:\Path_to_curl\curl.exe http://www.google.com/

Now use Task Scheduler to run the batch file every hour.

That's it.
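The scheduling step can also be scripted on Windows Server 2008 with the built-in schtasks command instead of the Task Scheduler GUI. A minimal sketch, assuming the batch file is saved at the hypothetical path C:\scripts\visit.bat:

```batch
rem Create a scheduled task that runs the batch file once an hour
schtasks /create /sc HOURLY /tn "VisitWebsite" /tr "C:\scripts\visit.bat"
```

The task name "VisitWebsite" is arbitrary; the same task can later be removed with schtasks /delete /tn "VisitWebsite".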

 
rgb192 (Author) commented:
using windows

url on my server or another server
 
Ray Paseur commented:
Do you want to do this in PHP?
 
rgb192 (Author) commented:
php would make it easier because i know php
 
leakim971 (Pluritechnician) commented:
wget is one of the most commonly used commands on Linux for downloading pages and files.
http://gnuwin32.sourceforge.net/packages/wget.htm

In a batch file, you just need to put:

wget http://www.yousiteaddress.com/thepage.php
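Put into a complete batch file, that might look like the sketch below; the path to wget.exe and the site address are placeholders:

```batch
@echo off
rem Fetch the page quietly; -O NUL discards the download (NUL is the Windows null device)
"C:\Program Files\GnuWin32\bin\wget.exe" -q -O NUL http://www.yoursiteaddress.com/thepage.php
```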
 
Ray Paseur commented:
OK, I know Linux, and not much about Windows, but I think you are on the right path with the task scheduler.

Here is a script that will do the GET interaction with a URL.  The URL can have arguments, etc.  It will retrieve whatever browser output the site produces (HTML).  If you use this with the task scheduler, it will be almost like typing the URL into your browser.

best regards, ~Ray
<?php // RAY_curl_example.php
error_reporting(E_ALL);

function my_curl($url)
{
    $curl = curl_init();

// HEADERS FROM FIREFOX - APPEARS TO BE A BROWSER REFERRED BY GOOGLE

    $header[] = "Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
    $header[] = "Cache-Control: max-age=0";
    $header[] = "Connection: keep-alive";
    $header[] = "Keep-Alive: 300";
    $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7";
    $header[] = "Accept-Language: en-us,en;q=0.5";
    $header[] = "Pragma: "; // browsers keep this blank.

    curl_setopt($curl, CURLOPT_URL, $url);
    curl_setopt($curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.15) Gecko/20080623 Firefox/2.0.0.15');
    curl_setopt($curl, CURLOPT_HTTPHEADER, $header);
    curl_setopt($curl, CURLOPT_REFERER, 'http://www.google.com');
    curl_setopt($curl, CURLOPT_ENCODING, 'gzip,deflate');
    curl_setopt($curl, CURLOPT_AUTOREFERER, true);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($curl, CURLOPT_TIMEOUT, 3); // GIVE UP AFTER THREE SECONDS

    if (!$html = curl_exec($curl))
    {
        curl_close($curl); // CLOSE THE HANDLE EVEN ON FAILURE
        return FALSE;
    }

    curl_close($curl);
    return $html;
}

// USAGE EXAMPLE
$url = 'http://twitter.com';
$htm = my_curl($url);
if (!$htm) die("NO $url");

echo "<pre>";
echo htmlentities($htm);

 
rgb192 (Author) commented:
only loops once


could I put a

while (no key pressed)

and a sleep(36000)

and have this loop all day?

would PHP time out?

<?php // RAY_curl_example.php 
error_reporting(E_ALL); 



// USAGE EXAMPLE 
$url = 'http://twitter.com'; 
$htm = my_curl($url); 
//if (!$htm) die("NO $url"); 
 
echo "<pre>"; 
echo htmlentities($htm);
 
function my_curl($url) 
{ 
    $curl = curl_init(); 
 
// HEADERS FROM FIREFOX - APPEARS TO BE A BROWSER REFERRED BY GOOGLE 
 
    $header[] = "Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5"; 
    $header[] = "Cache-Control: max-age=0"; 
    $header[] = "Connection: keep-alive"; 
    $header[] = "Keep-Alive: 300"; 
    $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7"; 
    $header[] = "Accept-Language: en-us,en;q=0.5"; 
    $header[] = "Pragma: "; // browsers keep this blank. 
 
    curl_setopt($curl, CURLOPT_URL, $url); 
    curl_setopt($curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.15) Gecko/20080623 Firefox/2.0.0.15'); 
    curl_setopt($curl, CURLOPT_HTTPHEADER, $header); 
    curl_setopt($curl, CURLOPT_REFERER, 'http://www.google.com'); 
    curl_setopt($curl, CURLOPT_ENCODING, 'gzip,deflate'); 
    curl_setopt($curl, CURLOPT_AUTOREFERER, true); 
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1); 
    curl_setopt($curl, CURLOPT_TIMEOUT, 3); // GIVE UP AFTER THREE SECONDS 
 
    if (!$html = curl_exec($curl)) 
    { 
        curl_close($curl); // CLOSE THE HANDLE EVEN ON FAILURE
        return FALSE; 
    } 
 
    curl_close($curl); 
    return $html; 
} 
 
?>

 
Ray Paseur commented:
"only loops once" - Uhh, right.  You have to run it more than once to get more than one result set.

You have two separate questions here.  One question is "how do I access the contents of a URL with the GET method?"  The other is "how do I do this over and over?"

The script I posted above answers the first question.

The task scheduler is the answer to the second question. You can learn more about it here:
http://lmgtfy.com?q=windows+task+scheduler
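For completeness: the loop-and-sleep approach from the comment above can work when the script is run from the PHP command line, where max_execution_time defaults to 0 (no limit), unlike a web request. A minimal sketch, using file_get_contents() instead of the cURL function and an example URL:

```php
<?php // loop.php - run from the command line: php loop.php
set_time_limit(0); // disable any execution time limit (the CLI default is already 0)

while (true)
{
    // Fetch the page and discard the result (requires allow_url_fopen = On)
    @file_get_contents('http://www.example.com/');
    sleep(3600); // wait one hour between requests
}
```

The task scheduler is still the more robust choice: if a long-running loop crashes, nothing restarts it, whereas a scheduled task fires again on the next hour regardless.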
 
rgb192 (Author) commented:
works
 
Ray Paseur commented:
Great!  Thanks for the points, ~Ray
