Questions about designing a data mining website and crawler
Posted on 2008-10-15
I have a few questions about a project I am working on. Being fairly new to the whole idea, I decided to read up and found a great deal of information. The project has been designed and I have started writing the code for it, but there are some issues that keep coming up.
Firstly, the bulk of the code comes in the form of class libraries that contain AI, rule processing and inference, database access, compression, etc. The crawler is also a class library that will reference the other libraries. The crawler will most likely be started by a console application or WinForms app so that it runs outside of the ASP.NET session (any thoughts on running it from the ASP.NET website?).
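To make the hosting idea concrete, here is a minimal sketch of a console host that keeps the crawler library alive until told to stop. `CrawlerEngine` is a hypothetical stand-in for the crawler library's entry point, not an actual class from the project:

```csharp
using System;
using System.Threading;

// Hypothetical stand-in for the crawler class library's entry point.
class CrawlerEngine
{
    private volatile bool _running;
    private Thread _worker;

    public void Start()
    {
        _running = true;
        _worker = new Thread(() =>
        {
            while (_running)
            {
                // Poll for scheduled jobs here.
                Thread.Sleep(1000);
            }
        });
        _worker.Start();
    }

    public void Stop()
    {
        _running = false;
        _worker.Join();
    }
}

class Program
{
    static void Main()
    {
        // Hosting the crawler in a console process keeps it
        // independent of the ASP.NET worker process and its sessions.
        var crawler = new CrawlerEngine();
        crawler.Start();
        Console.WriteLine("Crawler running. Press Enter to stop.");
        Console.ReadLine();
        crawler.Stop();
    }
}
```

The same hosting shell would work as a Windows Service, which avoids needing a logged-in user to keep the console window open.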
So the first question is:
How can I control, manage, and communicate with the web crawler while it's running, without using remoting or a TCP client/server? Would I have to use a web service?
Second question is:
Is there a better approach to this design?
As it stands now, I would like the crawler to sit waiting for jobs to come in and then store the information in the database. I do not want the website to have to reference the libraries, but it should still be able to access the data from the crawler and manage it as well.
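Since the website shouldn't reference the crawler libraries directly, the database itself can act as the job queue: the site inserts job rows, and the crawler polls for them. A rough sketch, assuming a hypothetical `CrawlJobs` table with `Id`, `Url`, and `Status` columns (the table name and schema are illustrative, not from the actual project):

```csharp
using System.Data.SqlClient;
using System.Threading;

// Minimal job-polling sketch: the website inserts rows into a
// hypothetical CrawlJobs table, and the crawler picks them up.
class JobPoller
{
    private readonly string _connectionString;

    public JobPoller(string connectionString)
    {
        _connectionString = connectionString;
    }

    public void Run()
    {
        while (true)
        {
            using (var conn = new SqlConnection(_connectionString))
            {
                conn.Open();
                var select = new SqlCommand(
                    "SELECT TOP 1 Id, Url FROM CrawlJobs WHERE Status = 'Pending'",
                    conn);
                int jobId = -1;
                string url = null;
                using (var reader = select.ExecuteReader())
                {
                    if (reader.Read())
                    {
                        jobId = reader.GetInt32(0);
                        url = reader.GetString(1);
                    }
                }
                if (jobId != -1)
                {
                    // Crawl the URL and store the results here, then
                    // mark the job done so it is not picked up again.
                    var update = new SqlCommand(
                        "UPDATE CrawlJobs SET Status = 'Done' WHERE Id = @id",
                        conn);
                    update.Parameters.AddWithValue("@id", jobId);
                    update.ExecuteNonQuery();
                }
            }
            Thread.Sleep(5000); // wait before polling again
        }
    }
}
```

The same table can double as the control channel: the site could insert "pause" or "stop" command rows that the crawler checks on each polling pass, which sidesteps remoting and TCP entirely.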
My main concern is that if I use the scheduler I wrote to schedule the jobs and start the crawler, the crawler will shut down when the session from the site ends. I am somewhat lost on how to continue with this part.
I appreciate any help I can get, and if I am being too vague just let me know and I will try to explain it in more detail and/or provide code snippets. Just as a side note, I am running SQL Server 2008, Windows Server 2008 (IIS 7), and .NET 3.5 (using Visual Studio 2008 to write it).