Simple find

Posted on 2004-03-31
Last Modified: 2008-03-17
A few years back there used to be an application called "Simple find". It was a nice Windows-based app that could query search engines by subject and then give you a list of results.

I have been trying to duplicate this feature in my own application and having some difficulty.

In my application the user can request a file type by extension.

I started out using Wininet.dll with a VC++ 6.0 example called Tear that could download an HTML page.

My thinking was to download the page, then parse it for either the locations of files or links to other HTML pages, then recursively parse those pages, and so on until the user's depth level had been reached or there were no more links found.

What I found was that not all the links are so straightforward. Some of them are full paths ="" while others were relative ("apple.jpg"). Besides that, there seem to be many more page types than just htm or html.

So I went looking to see if someone had already conquered a parsing mechanism.

Then I was directed to WGet.

WGet is a great tool, but it still does not quite do what I want it to do. I use WGet with ShellExecute and just pass params to Wget.exe from my program.

This only sort of works. It seems that WGet has just as much trouble with the parsing as I do.

Not only that, but I wanted the ability to query search engines. As things stand now, users have to enter a starting web page address.

I also notice that when I look at some of the pages where WGet missed files, I can see both absolute- and relative-path file links. Why is that?

Then one day I was chatting and someone suggested that I use a common server and a PHP script. The users would go to the one site (I'm not too sure how that would perform), and each of their applications would query that site; the PHP script would return results that it obtained from the search engines.

The person's thought was that you can query search engines, but you have to be careful because from time to time they change their format.
But it sounded as if he was only guessing and had never done the scripting himself.

I know from studying the address bar when I do searches on some of the less popular search engines that I could adjust the variables to change the search content and starting page. That seemed hopeful.

But then I noticed that Google and perhaps Yahoo had some kind of restriction, because I would get "page forbidden" or "no access" (can't remember exactly), but I was not allowed access. Somehow they could detect that it was not an original query but a machine-generated one.
I notice that some of the less popular engines did not do this.

So I am here fishing for guidance.


Question by:RJSoft
LVL 30

Expert Comment

ID: 10722945
So what exactly is your question?

Author Comment

ID: 10722982
Hello Axter.

A few questions.

First, does anyone know of a good parsing function or example?

I guess one that could start from the first results page of a search engine query and go deep enough to get to file locations (building relative paths also).

And perhaps a method or function to query search engines. Maybe avoiding PHP, because I don't know it. Or should I invest the time to learn it so I can get this functionality?

LVL 30

Expert Comment

ID: 10722989
I created a program somewhat similar to this before.

The file extension should not matter.  Your program should just verify that the file has HTML tags to validate that it is an HTML page.
You can add code to skip over common file types (gif, png, jpg, etc.).
The logic for creating a full path from a relative path is not that complicated either.
Just check whether the path contains "//".  If it doesn't, assume it's a relative path and prefix the current path to the target path.

In my program, I use the search engines as a starting point.
I had an option page in which the user could change the format for the search engine, or add other search engines with associated format.

One approach you can use is to have a fixed site that stores the format, and have your program look for this site every time it starts up.
That way, if the format changes, all you have to do is update the one site, and that will update all the users.

LVL 12

Assisted Solution

stefan73 earned 50 total points
ID: 10723157
Hi RJSoft,
Probably the easiest way to automatically query search engines is Perl's WWW::Search family of classes:

This small sample program shows its power:

    require WWW::Search;
    my $sQuery = 'Columbus Ohio sushi restaurant';
    my $oSearch = new WWW::Search('AltaVista');
    $oSearch->login($sUser, $sPassword);   # only needed for engines that require an account
    $oSearch->native_query(WWW::Search::escape_query($sQuery));
    while (my $oResult = $oSearch->next_result()) {
      print $oResult->url, "\n";
      } # while

If you don't know Perl yet, now's the perfect moment to learn it ;-)


Author Comment

ID: 10723226
That's pretty much what I had gathered before. But I have become a bit spoiled by trying to get WGet to do all the work with a simple ShellExecute and params.

I kind of dread going back to Wininet.dll and creating my own parsing, not knowing whether I could find any reliable search engines that would not simply change their format.

>>In my program, I use the search engines as a starting point.
I had an option page in which the user could change the format for the search engine, or add other search engines with associated format.

Sounds great! But is that a bit much for some users? Maybe I do not fully understand how much configuring the user is doing. As far as I could understand, the configuring would involve changing variable names in the search engine queries.


The address bar on a query shows (just an example, I don't remember exactly);page=1;

So I could have a dialog with user input: Apples.;page=1;

Now if dogpile suddenly changed its format to ;find=Apples;

Then how would you design a user interface that suggests changing variable names and adjusting their locations?

I know I am probably way off base here. Maybe this does not matter.

BTW, the only way I could figure out how to query the search engines was to manipulate the variables found in the address bar. I also found that I could reduce it to a working query using only the subject and page number. The page number was important to me because I wanted to pull in a good-sized listing of web addresses related to the subject.

I would then use Wininet.dll to download the pages.

So what happened to the code? Do you still have users that use it?

Would you mind sending an example of it? Or is that asking a bit too much?

Thanks in advance

Author Comment

ID: 10723261

Yes, you're correct. I would love to learn Perl.

But I am a bit fuzzy on what I would be doing. Do I have to rewrite my whole application in Perl, or could I call a Perl script from my Windows application? If so, how?

Also, doesn't it have to exist on a server? Then I also have the problem of multiple users accessing the same site. Or is this really a non-issue?

Tell me: in your example I see that it also asks for a password.

$oSearch->login($sUser, $sPassword);
Is this because of its existence on a server, or is it something else?

Is there a way around this, and why is it required in the example?

How about some beginner books?



LVL 17

Accepted Solution

rstaveley earned 50 total points
ID: 10725037
If you're not too comfortable in Perl, you can call C/C++ programs from your CGI script to do the parsing.

Use WWW::Search to get the URLs. Use wget/lynx -source to fetch the HTML (or LWP::UserAgent if you want to spread your wings in Perl). Then parse the HTML file with your C++ program for more URLs, and use wget/lynx -source to fetch their HTML... <etc.>

You'll probably need to pass a nesting indicator so you don't recurse ad infinitum. You'll need to pass the URL path, so that relative URLs can be resolved. You should look for "href=" (or "HREF="). I doubt if processing "action=" would be too fruitful. Processing URLs in JavaScript would be hard work.

Here's a quick'n'dirty stab at the C++ code. It simply lists the URLs found in HREF attributes in an HTML file. It would be easy to add SRC= to this, if you reckon that would be valuable. The list is written to standard output, which should be something Perl groks. This would probably be done more easily in Perl, but like you, my Perl is weak.
#include <iostream>
#include <fstream>
#include <string>
#include <vector>

using std::string;

int main(int argc,const char *argv[])
{
      // Usage
      if (argc != 3) {
            std::cerr << "Usage: " << *argv << " {html-file} {url}\n";
            return 1;
      }

      // Open the HTML file
      std::ifstream fin(*++argv);      // Input file
      if (!fin) {
            std::cerr << "Error: Unable to open " << *argv << '\n';
            return 1;
      }

      // Read off the full URL
      string url(*++argv);      // Full URL

      // Use path for relative URLs
      string path = url;      // Get the path from which URLs are relative to
      string::size_type pos;
      if ((pos = path.find('?')) != string::npos)      // Lose the query string
            path.erase(pos);
      if ((pos = path.rfind('/')) != string::npos && pos > 7)      // Keep the "http://", but lose the filename
            path.erase(pos);

      // Get the root for URLs, which start with '/'
      string root = path;
      if ((pos = root.find('/',7)) != string::npos)      // Keep the "http://", but lose everything after the host
            root.erase(pos);

      path += '/';            // Add a '/' separator to the path for relative URLs

      // Looking for these attributes
      typedef std::vector<string> SVector;
      SVector attributeList;
      attributeList.push_back("href=");
      attributeList.push_back("HREF=");

      // Process the file
      string line;
      while (getline(fin,line)) {
            // Process each of the sought attributes
            for (SVector::size_type i = 0;i < attributeList.size();++i) {
                  const string& attr = attributeList[i];
                  for (string::size_type pos = 0;(pos = line.find(attr,pos)) != string::npos;++pos) {
                        const string remains = line.substr(pos+attr.size());
                        if (!remains.size())
                              continue;
                        string url;
                        if (remains[0] == '\"') {
                              string::size_type end = remains.find('\"',1);
                              if (end != string::npos && end > 1)
                                    url = remains.substr(1,end-1);
                        } else
                              url = remains.substr(0,remains.find_first_of(" \t>"));
                        if (!url.size())
                              continue;
                        if (url.find("://") != string::npos)
                              std::cout << url << '\n';            // Absolute URL
                        else if (url[0] != '/')
                              std::cout << path << url << '\n';    // Relative to the page's path
                        else
                              std::cout << root << url << '\n';    // Relative to the site root
                  }
            }
      }
      return 0;
}

Author Comment

ID: 10725934

Thanks for taking the time to write and post the parsing code above. I may end up using it; I don't know.

For now I am simply trying to decide how I should design this thing, before I end up spending serious time re-inventing the wheel.

If Perl is what I should learn , then Perl I will learn.

I am just unfamiliar with how the arrangement should be.

My understanding is...

SCENARIO #1: Perl script, CGI script, or PHP??
My application.
The user selects a file type and a subject matter.
Button is pressed.

My application makes a request to a Perl/cgi/php.. script that resides on a specific web server.

How, I am not exactly sure. How do I activate a server-side script from within my client-side application?

Next, the script queries a search engine or group of search engines, and perhaps the results are returned in the form of an HTML page, which is later parsed so something like WGet can download files from the given addresses.
Or maybe some stuff from FTP. I might like to get away from WGet.exe, as I don't really like shelling out to an exe as opposed to using a DLL.


SCENARIO #2
My application.
The user selects a file type and a subject matter.
Button is pressed.

My application uses something like Wininet.dll to get the search engine pages. The pages are parsed and two lists are built: one list contains web page links, which will be recursively parsed; the other contains actual file locations.
Maybe use WGet to download the files.


Both scenarios leave me confused.

On one hand I have a server-side script that produces an output file. I guess it would not matter if that file had the same name for each user of the script, as the result would be overwritten (assuming the result file resides on the server). I take the downloading of the result to be a copy. I don't really perceive this as a problem, as my software is not that popular yet, but I do have a problem with continually adding bandwidth for more and more users. It could become a problem of server slowness.

Both scenarios have the same problem of a changing search engine format, but with the script it could be changed in one place and all is corrected.
So that is a plus for the server script.

On the other hand, I really don't know how often the search engines change format, or if it even really matters, because if they keep the variable names the same, what is the difference? Maybe the less popular search engines, which are not worried about being sucked up for processing time by scraper-type programs like the one I am trying to create, won't change their format.

Scenario 2 has the advantage of not relying on a server script, which could prove more cost-effective in the long run.

But I gotta tell ya. I like the perl script by stefan73.

Hey stefan73, am I making any sense?
Have you done this before?


LVL 30

Expert Comment

ID: 10729936
>>So what happened to the code? Do you still have users that use it?
>>Would you mind sending an example of it? Or is that asking a bit too much?

Sorry, but I lost the code and the program when my computer crashed a couple of years ago.
It was something I was playing around with, and I lost interest in it, so I didn't pull it out of my tape backup when I restored my computer.

Author Comment

ID: 10730656
Thanks anyway Axter.

You know, you always seem to be a few steps (years) ahead of me (been there, done that).

Out of curiosity, what ideas are you kicking around these days? (Maybe I will readjust my scope. I am tired of being too far behind the times. It seems that by the time I conceive an idea and finally get it to market, I am already way behind.)

I am not wanting to steal any ideas. I just love to program and hope to develop something more substantial/profitable.

LVL 17

Expert Comment

ID: 10730756
> SCENARIO #1 perl script or cgi script or php??

CGI scripts tend to be written in Perl. Stefan's WWW::Search is too good a fit for you not to use and it should be easy to adapt it into a CGI. You could perhaps have it return XML which you could parse on your Windows application using MSXML. [If you've not already done this sort of thing, you'll be pleasantly surprised by MSXML.]

So your Windows application issues an HTTP request via MSXML to your CGI script.

The CGI script works on your (say) Linux server with WWW::Search, wget and your C++ parser executable to return:

  <?xml version="1.0" ?>
  <results>
    <result url="" />
    <result url="" />
  </results>

Your Windows application then uses MSXML's DOM parser to do pretty things with the URLs.

That's how I'd do it.
LVL 17

Expert Comment

ID: 10730789
>  hoping to develop something more .... profitable.

I reckon I earn my living at the trailing edge of technology. It is interesting and profitable... and much better documented than the leading edge.

Author Comment

ID: 10730790
Thanks rstaveley.

I am currently shopping around for a good beginner's Perl book. I am glad to hear that I don't have to rewrite my whole application. I have read a little on MSXML and have somewhat of an idea.

Appreciated. I'll definitely have to save these posts on my PC.


Author Comment

ID: 10730801
rstaveley, now you've got me curious. What do you do, fix up legacy code for some shop? What kind of product is it? (What market?)

I used to work on prison inmate accounting software. It was good, but to make a long story short, they sold out.

LVL 17

Expert Comment

ID: 10730927
I write applications for broadcast television. It is a mixed bag of technologies, but none of them could claim to be leading edge - unless you were a salesman ;-)

Question has a verified solution.
