gobicse
asked on
JUnit test case
Hi there,
I have to write test cases for my web crawler application. It has many methods in it. Can anyone help me write a test case for the following method? I can do the remaining methods once I understand how you have written yours. I have gone through lots of links, but all of them show very simple test cases. Can anyone please help me?
public static void crawl(String startUrl, int maxUrls, boolean limitHost,
        String searchString, boolean caseSensitive, int flag)
{
    System.out.println("Entering crawl method for " + startUrl);
    // Set up the crawl lists.
    HashSet<String> crawledList = new HashSet<String>();
    LinkedHashSet<String> toCrawlList = new LinkedHashSet<String>();
    // Add the start URL to the to-crawl list.
    toCrawlList.add(startUrl);
    // Perform the actual crawling by looping through the to-crawl list.
    // ('crawling' is a class-level flag used to cancel the crawl.)
    while (crawling && toCrawlList.size() > 0)
    {
        // Stop if the max URL count has been reached (when one was specified).
        if (maxUrls != -1 && crawledList.size() == maxUrls) {
            break;
        }
        // Get the URL at the head of the list and remove it from the to-crawl list.
        String url = toCrawlList.iterator().next();
        toCrawlList.remove(url);
        // Convert the string URL to a URL object.
        URL verifiedUrl = urlProcess.UrlVerification.verifyUrl(url);
        // Skip this URL if robots are not allowed to access it.
        if (!urlProcess.RobotCheck.isRobotAllowed(verifiedUrl)) {
            continue;
        }
        userInterface.SearchCrawler.updateStats(url, crawledList.size(),
                toCrawlList.size(), maxUrls, startUrl);
        // Add the page to the crawled list.
        crawledList.add(url);
        // Download the page at the given URL.
        String pageContents = urlProcess.PageDownload.downloadPage(verifiedUrl, flag);
        // If the page was downloaded successfully, retrieve all of its links
        // and then see if it contains the search string.
        if (pageContents != null && pageContents.length() > 0)
        {
            // Retrieve the list of valid links from the page.
            ArrayList<String> links = crawlProcess.LinkRetrieval.retrieveLinks(
                    verifiedUrl, pageContents, crawledList, limitHost, flag);
            // Add the links to the to-crawl list.
            toCrawlList.addAll(links);
            // Check if the search string is present in the page; if so, record a match.
            if (crawlProcess.StringMatch.searchStringMatches(pageContents,
                    searchString, caseSensitive))
            {
                userInterface.SearchCrawler.addMatch(url, startUrl);
            }
        }
        // Update the crawling stats.
        userInterface.SearchCrawler.updateStats(url, crawledList.size(),
                toCrawlList.size(), maxUrls, startUrl);
    }
}
ASKER
Hi gurvinder,
I have gone through all these links, especially that Money example. The problem is that I don't know what the test case should look like for my application; that's why I need your help. Kindly let me know what the test case for the above method should be, or at least a skeleton for it.
ASKER CERTIFIED SOLUTION
SOLUTION
http://junit.sourceforge.net/doc/cookbook/cookbook.htm
http://articles.techrepublic.com.com/5100-10878_11-1027676.html
If you are using Eclipse:
http://help.eclipse.org/help32/topic/org.eclipse.jdt.doc.user/gettingStarted/qs-junit.htm
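One practical difficulty with testing `crawl()` as written is that it is static, hits the network, and updates the UI, so it cannot be exercised in isolation. A common approach is to extract the small pure decisions into helpers and unit-test those first. The sketch below assumes two hypothetical helpers, `shouldStop` (the max-URL check from the loop) and `nextUrl` (taking the head of the to-crawl list); neither name exists in the original code.

```java
import java.util.LinkedHashSet;

public class CrawlLogicTest {

    // Hypothetical extraction of the max-URL check from crawl():
    // stop when a limit was specified (maxUrls != -1) and it has been reached.
    static boolean shouldStop(int crawledCount, int maxUrls) {
        return maxUrls != -1 && crawledCount >= maxUrls;
    }

    // Hypothetical extraction of "take the next URL from the to-crawl list".
    // LinkedHashSet preserves insertion order, so this behaves like a FIFO queue.
    static String nextUrl(LinkedHashSet<String> toCrawlList) {
        String url = toCrawlList.iterator().next();
        toCrawlList.remove(url);
        return url;
    }

    public static void main(String[] args) {
        // -1 means "no limit", so the count alone never stops the crawl.
        check(!shouldStop(1000, -1), "no limit");
        check(shouldStop(5, 5), "limit reached");
        check(!shouldStop(4, 5), "below limit");

        LinkedHashSet<String> list = new LinkedHashSet<String>();
        list.add("http://a.example");
        list.add("http://b.example");
        check(nextUrl(list).equals("http://a.example"), "FIFO order");
        check(list.size() == 1, "URL removed after being taken");
        System.out.println("all checks passed");
    }

    static void check(boolean ok, String what) {
        if (!ok) throw new AssertionError(what);
    }
}
```

In JUnit proper, each `check(...)` above would become a `testXxx()` method on a `TestCase` subclass (JUnit 3 style, as in the cookbook linked above) using `assertTrue`/`assertEquals`; the logic under test is the same either way.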