I'm trying to make a web parser that runs as a MapReduce job. Since the program essentially sits idle while it downloads each document, I made it multithreaded: my idea is that my threads pull URLs off of a URL pile. This tripled the program's speed when I ran it on EMR with medium instances, but on a large instance I get out-of-memory errors. Do I just need fewer threads, or is the number of threads I'm creating not as tightly controlled as I think? Here is my mapper:

public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, Text> {
    private Text word = new Text();
    private URLPile pile = new URLPile();

    @Override
    public void map(LongWritable key, Text value, OutputCollector<Text, Text> output, Reporter reporter) {
        String url = value.toString();
        StringTokenizer urls = new StringTokenizer(url);
        Config.LoggerProvider = LoggerProvider.DISABLED;

        // start eight worker threads that consume URLs from the pile
        MyThread[] Threads = new MyThread[8];
        for (MyThread thread : Threads) {
            thread = new MyThread(output, pile);
            thread.start();
        }

        // feed every URL token on this input line into the pile
        while (urls.hasMoreTokens()) {
            try {
                if (urls.hasMoreTokens()) {
                    word.set(urls.nextToken());
                    String currenturl = word.toString();
                    pile.addUrl(currenturl);
                } else {
                    System.out.println("out of tokens");
                    pile.waitTillDone();
                }
            } catch (Exception e) {
                // skip this URL on any error and move on
                continue;
            }
        }
    }
}
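
URLPile and MyThread are not shown above; roughly, they form a producer/consumer pair along these lines (a simplified sketch only: the pile is backed by a blocking queue, the workers pull URLs and write results through the mapper's OutputCollector, and the actual download/parse step is reduced to a placeholder):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.OutputCollector;

// Sketch: a shared queue of URLs plus a pending counter so waitTillDone()
// can block until every URL that was added has been processed.
class URLPile {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();
    private final AtomicInteger pending = new AtomicInteger(0);

    public void addUrl(String url) {
        pending.incrementAndGet();
        queue.add(url);
    }

    // called by worker threads; blocks until a URL is available
    public String takeUrl() throws InterruptedException {
        return queue.take();
    }

    // called by a worker when it has finished one URL
    public synchronized void markDone() {
        if (pending.decrementAndGet() == 0) {
            notifyAll();
        }
    }

    public synchronized void waitTillDone() throws InterruptedException {
        while (pending.get() > 0) {
            wait();
        }
    }
}

// Sketch: each worker pulls URLs from the pile until it is interrupted,
// processes them, and emits the result through the mapper's OutputCollector.
class MyThread extends Thread {
    private final OutputCollector<Text, Text> output;
    private final URLPile pile;

    public MyThread(OutputCollector<Text, Text> output, URLPile pile) {
        this.output = output;
        this.pile = pile;
    }

    @Override
    public void run() {
        try {
            while (!isInterrupted()) {
                String url = pile.takeUrl();
                try {
                    // placeholder for the real download/parse step
                    String parsed = fetch(url);
                    output.collect(new Text(url), new Text(parsed));
                } catch (Exception e) {
                    // skip URLs that fail to download or parse
                } finally {
                    pile.markDone();
                }
            }
        } catch (InterruptedException e) {
            // stop when interrupted while waiting for a URL
        }
    }

    // hypothetical helper; the real class uses an HTTP client and parser
    private String fetch(String url) throws Exception {
        return "";
    }
}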