mmalik15 asked:
How to extract all the hyperlinks on this webpage

On this web page, if we click "Expand all" in the list of browse topics, how can we extract all the hyperlinks with titles like "Adoption", "Access to birth records", etc.?
kaufmed replied:

[solution available only to Experts Exchange members]
mmalik15 replied:
Many thanks again, kaufmed.

How can I exclude the RSS link in the XPath? Apart from that, it's working fine.

Also, could you kindly tell me of any XPath tool for extracting information from the HTML DOM, or what's the best approach to writing XPath against the HTML DOM?
kaufmed replied:

Oh, sorry. I meant to exclude that as well:

doc.DocumentNode.SelectNodes("//span[@class='branch']//a[not(starts-with(@href, 'javascript:')) and not(starts-with(@href, 'rss/'))]")

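The same filtering can be sketched outside of HtmlAgilityPack. Below is a minimal Python stand-in using only the standard-library `html.parser`; the sample markup, the link texts, and the URLs are invented for illustration, but the logic mirrors the XPath above: take `<a>` tags inside `<span class="branch">` and drop any `href` starting with `javascript:` or `rss/`.

```python
from html.parser import HTMLParser

class BranchLinkExtractor(HTMLParser):
    """Collects hrefs of <a> tags nested inside <span class="branch">,
    skipping javascript: and rss/ links (mirrors the XPath predicate)."""
    def __init__(self):
        super().__init__()
        self.span_stack = []  # True for each open <span class="branch">
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "span":
            self.span_stack.append(attrs.get("class") == "branch")
        elif tag == "a" and any(self.span_stack):
            href = attrs.get("href", "")
            if not href.startswith(("javascript:", "rss/")):
                self.links.append(href)

    def handle_endtag(self, tag):
        if tag == "span" and self.span_stack:
            self.span_stack.pop()

# Hypothetical markup standing in for the real page:
html = """
<span class="branch">
  <a href="/topics/adoption">Adoption</a>
  <a href="javascript:void(0)">Expand all</a>
  <a href="rss/topics.xml">RSS</a>
  <a href="/topics/birth-records">Access to birth records</a>
</span>
"""
parser = BranchLinkExtractor()
parser.feed(html)
print(parser.links)  # -> ['/topics/adoption', '/topics/birth-records']
```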

mmalik15 replied:

Brilliant, kaufmed. It's working perfectly.

I use Altova to test XPath on XML documents, but I wonder if there is a similar tool for the HTML DOM.
kaufmed replied:

I don't know of any. HTML is becoming more closely aligned with XML as new standards are released. Most of the frameworks people use today to build HTML produce well-formed markup (similar to XML). As such, you should be able to use Altova on any well-formed HTML, since well-formed HTML is essentially XHTML, which is an XML vocabulary. Unless you are dealing with someone who hand-codes their web pages, you should be OK using Altova.

One of the reasons HTML Agility Pack is so popular is that the team sought to build a library that can handle (as best one can) malformed HTML. HAP takes some liberties in making the source HTML well-formed so that you can run XPath queries against the loaded document.
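That tolerance for sloppy markup can be illustrated with a small sketch. Here Python's standard-library `html.parser` stands in for HAP (it is similarly lenient), and the deliberately broken markup is invented: unclosed `<li>` and `<a>` tags and a stray `</div>` don't stop the links from being recovered.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects every href it sees, even from malformed markup."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.hrefs.append(dict(attrs).get("href", ""))

# Deliberately malformed: unclosed <li>/<a> tags and a stray </div>.
messy = "<ul><li><a href='/one'>one<li><a href='/two'>two</div>"
p = LinkCollector()
p.feed(messy)
print(p.hrefs)  # -> ['/one', '/two']
```

A strict XML parser would reject this input outright; a forgiving HTML parser (like HAP) instead repairs or tolerates it, which is exactly what makes XPath usable against real-world pages.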
mmalik15 replied:

Thanks, kaufmed... It's worth having an EE membership because of the presence of people like you!