jmcnealy1 asked:
Grab the search results from a website using curl.

Hello everybody:

How would I use curl to grab the search results from a website like http://www.feed24.com/? I would like a command-line script if possible. I want just the returned results (the text), not the HTML. Also, I would like to get each news item one at a time, not all as one HTML page; that way, I could write individual news items to a file or post them to a blog. Finally, is there a way to automatically follow the link to the next page of the search results and continue the download? Thanks.
designbai replied:

Try this to grab a page using curl (curl must be installed first):

<?php
$URL = "http://www.feed24.com/";
// Note: those are backticks (`), the key to the left of 1, not single quotes.
$string = `/usr/bin/curl -s $URL`;  // -s keeps curl's progress meter quiet
echo $string;
?>

The $string variable now contains the whole webpage.

You will then have to write code to parse out the required news items; a rough sketch follows.
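
For example, here is a minimal sketch of that parsing step using PHP's DOM extension. The "item" and "next" class names are placeholders, not feed24.com's real markup (I have not inspected it), so you would adjust the XPath queries after viewing the page source. It writes each news item to its own file and tries to follow a "next page" link, as you asked:

<?php
// Sketch only: the class names "item" and "next" below are guesses --
// inspect the site's actual HTML and adjust the XPath before using this.
$url = 'http://www.feed24.com/';

for ($page = 1; $page <= 3; $page++) {       // stop after a few pages to be polite
    $html = `/usr/bin/curl -s $url`;
    if ($html == '') break;

    $doc = new DOMDocument();
    @$doc->loadHTML($html);                  // @ hides warnings about sloppy HTML
    $xpath = new DOMXPath($doc);

    // One node per news item (hypothetical selector).
    $items = $xpath->query("//div[@class='item']");
    $n = 0;
    foreach ($items as $item) {
        $text = trim($item->textContent);    // textContent strips the tags
        file_put_contents("news_{$page}_{$n}.txt", $text);
        $n++;
    }

    // Follow the "next page" link if one exists (hypothetical selector);
    // a relative href would need to be resolved against the site root.
    $next = $xpath->query("//a[@class='next']");
    if ($next->length == 0) break;
    $url = $next->item(0)->getAttribute('href');
}
?>

The catch with this approach is that it breaks whenever the site changes its markup, which is why the RSS suggestion below is worth considering.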

Thanks.
ASKER CERTIFIED SOLUTION by designbai (solution text available to Experts Exchange members only)

SOLUTION by Marcus Bointon (solution text available to Experts Exchange members only)
I do agree that an HTML parser is not always trustworthy, because if the page changes, we have to rewrite the parsing code.

But in the case of RSS, we do not need to worry: the structure is standard, so we can work with it reliably.
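
For example, here is a minimal sketch of the RSS approach using PHP's SimpleXML. The feed URL below is a placeholder, so look up the site's real feed address first:

<?php
// Sketch assuming a standard RSS 2.0 feed; the URL is a guess.
$feedUrl = 'http://www.feed24.com/rss';

$xml = `/usr/bin/curl -s $feedUrl`;
$rss = simplexml_load_string($xml);
if ($rss === false) {
    die("Could not parse the feed.\n");
}

// RSS 2.0 always puts items under channel/item, so this keeps
// working even when the page layout changes.
foreach ($rss->channel->item as $item) {
    echo $item->title, "\n";
    echo $item->link, "\n";
    echo strip_tags((string) $item->description), "\n\n";
}
?>

Because the item structure is fixed by the RSS specification, this is far less fragile than scraping the HTML, and each item arrives separately, ready to write to a file or post to a blog.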
jmcnealy1 (asker) replied:

Thanks a lot for the help. The point about just using RSS as it is was a good one. I'll work on that in the future.