
I don't know why this is happening. It worked fine last night, but this morning I'm getting

Fatal error: Out of memory (allocated 1611137024) (tried to allocate 1610350592 bytes) in /home/twitcast/public_html/system/index.php on line 121

The section of code being run is as follows:

function podcast()
{
    $fetch = new server();
    $fetch->connect("TCaster");
    $collection = $fetch->db->shows;

    // find everything in the collection
    $cursor = $collection->find();

    if ($cursor->count() > 0) {
        $test = array();
        // iterate through the results
        while ($cursor->hasNext()) {
            $test[] = $cursor->getNext();
        }

        foreach ($test as $d) {
            for ($i = 0; $i <= 3; $i++) {
                $url = $d["streams"][$i];
                $xml = file_get_contents($url);
                $doc = new DOMDocument();
                $doc->preserveWhiteSpace = false;
                $doc->loadXML($xml); // $xml = file_get_contents( "http://www.c3carlingford.org.au/podcast/C3CiTunesFeed.xml")

                // Initialize XPath
                $xpath = new DOMXpath($doc);
                // Register the itunes namespace
                $xpath->registerNamespace('itunes', 'http://www.itunes.com/dtds/podcast-1.0.dtd');

                $items = $doc->getElementsByTagName('item');
                foreach ($items as $item) {
                    $title     = $xpath->query('title', $item)->item(0)->nodeValue;
                    $published = strtotime($xpath->query('pubDate', $item)->item(0)->nodeValue);
                    $author    = $xpath->query('itunes:author', $item)->item(0)->nodeValue;
                    $summary   = $xpath->query('itunes:summary', $item)->item(0)->nodeValue;
                    $enclosure = $xpath->query('enclosure', $item)->item(0);
                    $url       = $enclosure->attributes->getNamedItem('url')->value;

                    $fname = basename($url);
                    $collection = $fetch->db->shows_episodes;

                    $cursorfind = $collection->find(array("internal_url" => "http://twitcatcher.russellharrower.com/videos/$fname"));
                    if ($cursorfind->count() < 1) {
                        $copydir = "/home/twt/public_html/videos/";
                        $data = file_get_contents($url);
                        $file = fopen($copydir . $fname, "w+");
                        fputs($file, $data);
                        fclose($file);

                        $collection->insert(array("show_id" => new MongoId($d["_id"]), "stream" => $i, "episode_title" => $title, "episode_summary" => $summary, "published" => $published, "internal_url" => "http://twitcatcher.russellharrower.com/videos/$fname"));

                        echo "$title <br> $published <br> $summary <br> $url<br><br>\n\n";
                    }
                }
            }
        }
    }
}

Line 121 is

$data = file_get_contents($url);

2 Answers


memory_limit: have a look at this directive. That may be all you need.
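A minimal sketch of raising it (the 2G value here is purely illustrative, not a recommendation):

ini_set('memory_limit', '2G'); // raise the limit for this script only
// or persistently, in php.ini:
// memory_limit = 2G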

Or you could try using copy() instead of reading the file into memory (as far as I can tell these are video files, so it's no surprise they take a lot of memory):

$copydir = "/home/twt/public_html/videos/";
copy($url, $copydir . $fname);
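copy() moves the data through PHP's stream layer in fixed-size chunks, so peak memory use stays small no matter how large the video is.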

(It just looks like the file fetched last night was smaller.)

Answered 2012-08-27T00:50:56.570

You want to add 1.6 GB of memory usage for a single PHP thread? While you can increase the memory limit, I strongly suggest you look at another way of doing what you want.

Probably the easiest solution: you can use cURL to request a byte range of the source file (using cURL is wiser than file_get_contents anyway for remote files). You can get 100 KB at a time, write that to the local file, then get the next 100 KB and append it to the file, and so on until the entire file is pulled in.
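A rough sketch of that approach, assuming the remote server honours Range requests ($url, $dest, and the chunk size are illustrative):

$url   = "http://example.com/big-video.mp4";          // illustrative source
$dest  = "/home/twt/public_html/videos/" . basename($url);
$chunk = 100 * 1024;                                  // 100 KB per request
$start = 0;
$out   = fopen($dest, "w");

while (true) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_RANGE, $start . "-" . ($start + $chunk - 1));
    $data = curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    if ($data === false || ($code != 200 && $code != 206)) {
        break;                          // transfer error, or range past end of file
    }
    fwrite($out, $data);                // append this chunk to the local file
    if ($code == 200 || strlen($data) < $chunk) {
        break;                          // server sent the whole file at once, or last chunk
    }
    $start += $chunk;
}
fclose($out);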

You could also do something with streams, but it gets a little more complex. This may be your only option if the remote server won't let you fetch part of a file by byte range.
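In its simplest form the stream route could look like this (an untested sketch; $url, $copydir, and $fname are the question's variables). It avoids the memory blow-up because stream_copy_to_stream() copies in fixed-size chunks rather than loading the whole file:

$src = fopen($url, "rb");               // remote read stream
$dst = fopen($copydir . $fname, "wb");  // local write stream
if ($src !== false && $dst !== false) {
    stream_copy_to_stream($src, $dst);  // copies chunk by chunk, never the whole file at once
}
fclose($src);
fclose($dst);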

Finally, there are Linux commands such as wget that you can run through exec(), if your server has the permissions.
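If exec() is allowed, that could look something like this (paths are illustrative; escapeshellarg() guards against shell injection):

$cmd = "wget -q -O " . escapeshellarg($copydir . $fname) . " " . escapeshellarg($url);
exec($cmd, $output, $status);
if ($status !== 0) {
    // wget failed; handle the error
}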

Answered 2012-08-27T01:44:58.417