Hi, you may consider parsing the HTML content of that page. It doesn't really matter what its structure is; you just need to grab the link (`<a>`) tags. So the first things you need to do are:
1- Use an HTML parser. I recommend Html Agility Pack; it's a very mature HTML parser with a lot of features, such as a LINQ-to-XML-style API, among others (see the first sketch after this list).
2- Alternatively, parse the text using regular expressions; that way you can match whatever HTML tag you want without writing much code for it (see the second sketch after this list).
3- You need to think about how deep you want to crawl. Imagine the following scenario:
www.mywebsite.com/tab3 could contain www.mywebsite.com/tab3/link2 and www.mywebsite.com/tab3/link3, and so on, so putting a limit on the depth is very important (see the third sketch after this list).
4- You can create your own Windows service and use a web request to do the crawling, or use a third-party crawler; that depends on the purpose of what you want to do. I haven't used this one myself, but it seems OK to me, and it may be worth a look:
Abot C# Web Crawler
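
For step 1, here's a minimal sketch of link extraction with Html Agility Pack. The URL is just the placeholder from the scenario above:

```csharp
using System;
using HtmlAgilityPack;

class LinkExtractor
{
    static void Main()
    {
        // HtmlWeb downloads and parses the page in one call.
        var web = new HtmlWeb();
        HtmlDocument doc = web.Load("http://www.mywebsite.com/tab3"); // placeholder URL

        // XPath: every <a> element that carries an href attribute.
        var anchors = doc.DocumentNode.SelectNodes("//a[@href]");
        if (anchors == null) return; // SelectNodes returns null when nothing matches

        foreach (HtmlNode a in anchors)
            Console.WriteLine(a.GetAttributeValue("href", string.Empty));
    }
}
```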
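For step 2, a regex-based sketch. Keep in mind regular expressions are fragile against malformed HTML, so prefer a real parser when you can:

```csharp
using System;
using System.Net;
using System.Text.RegularExpressions;

class RegexLinks
{
    static void Main()
    {
        string html;
        using (var client = new WebClient())
            html = client.DownloadString("http://www.mywebsite.com/tab3"); // placeholder URL

        // Capture the href value of every <a> tag (double- or single-quoted).
        var hrefPattern = new Regex(@"<a\s[^>]*href\s*=\s*[""']([^""']+)[""']",
                                    RegexOptions.IgnoreCase);

        foreach (Match m in hrefPattern.Matches(html))
            Console.WriteLine(m.Groups[1].Value);
    }
}
```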
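For steps 3 and 4, a minimal sketch of a depth-limited crawl, again with Html Agility Pack; `MaxDepth = 2` and the seed URL are assumptions you'd tune to your site:

```csharp
using System;
using System.Collections.Generic;
using HtmlAgilityPack;

class DepthLimitedCrawler
{
    const int MaxDepth = 2; // assumption: stop two levels below the seed page

    static void Main()
    {
        var web = new HtmlWeb();
        var visited = new HashSet<string>();
        var queue = new Queue<(string Url, int Depth)>();
        queue.Enqueue(("http://www.mywebsite.com/tab3", 0)); // placeholder seed

        while (queue.Count > 0)
        {
            var (url, depth) = queue.Dequeue();
            if (depth > MaxDepth || !visited.Add(url)) continue; // enforce the limit, skip repeats

            Console.WriteLine($"[{depth}] {url}");

            HtmlDocument doc;
            try { doc = web.Load(url); }
            catch { continue; } // skip pages that fail to load

            var anchors = doc.DocumentNode.SelectNodes("//a[@href]");
            if (anchors == null) continue;

            foreach (var a in anchors)
            {
                // Resolve relative links against the current page; in a real crawler
                // you'd also restrict this to the same host.
                if (Uri.TryCreate(new Uri(url), a.GetAttributeValue("href", ""), out var abs))
                    queue.Enqueue((abs.ToString(), depth + 1));
            }
        }
    }
}
```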
Edit:
If the page is blank, you can crawl Google with `site:yourdomain.com` as your primary page and then extract the links that point to the actual domain instead of Google, or try to crawl the site's robots.txt (see the sketch below).
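
For the robots.txt route, a minimal sketch that fetches the file and prints any `Sitemap:` entries you could use as seed URLs; the domain is a placeholder, and note that not every site lists a sitemap there:

```csharp
using System;
using System.Net;

class RobotsTxtReader
{
    static void Main()
    {
        string robots;
        using (var client = new WebClient())
            robots = client.DownloadString("http://yourdomain.com/robots.txt"); // placeholder domain

        // robots.txt is line-oriented; Sitemap: lines point at URL lists you can crawl.
        foreach (var line in robots.Split('\n'))
        {
            var trimmed = line.Trim();
            if (trimmed.StartsWith("Sitemap:", StringComparison.OrdinalIgnoreCase))
                Console.WriteLine(trimmed.Substring("Sitemap:".Length).Trim());
        }
    }
}
```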