When crawling the web, I want to give my crawler an initial seed list of URLs and expect it to automatically 'discover' new URLs from the internet during its crawl.
I see such an option in Apache Nutch (see the topN parameter of the generate command). Is there a similar option in StormCrawler as well?
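For reference, this is roughly the Nutch 1.x invocation I mean (the crawldb and segments paths below are just placeholders); -topN limits the generated fetch list to the highest-scoring URLs, including ones discovered during earlier rounds:

    bin/nutch generate crawl/crawldb crawl/segments -topN 1000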