
The Archive Base


Editorial Team
Asked: May 13, 2026 at 1:22 pm


I am developing an application that is, to put it simply, a niche-based search engine. Within the application I have included a crawl() function, which crawls a website and then uses the collectData() function to store the relevant data from each site in the “products” table, as described in the function. The visited pages are stored in a database.

The crawler works pretty well, just as described, except for two things: timeouts and memory. I’ve managed to correct the timeout error, but the memory problem remains. I know that simply increasing memory_limit doesn’t actually fix the problem.

The function is run by visiting “EXAMPLE.COM/products/crawl”.

Is a memory leak inevitable with a PHP web crawler, or is there something I’m doing wrong (or not doing)?

Thanks in advance. (Code below.)

    function crawl() {
        $this->_crawl('http://www.example.com/', 'http://www.example.com');
    }

    /**
     * Finds all links in $start, collects data from them,
     * and recursively crawls them.
     *
     * @param $start  the web page where the crawler starts
     * @param $domain the domain in which to stay
     */
    function _crawl($start, $domain) {
        $dom = new DOMDocument();
        @$dom->loadHTMLFile($start);

        $xpath = new DOMXPath($dom);
        $hrefs = $xpath->evaluate("/html/body//a"); // get all <a> elements

        for ($i = 0; $i < $hrefs->length; $i++) {
            $href = $hrefs->item($i);
            $url = $href->getAttribute('href'); // get href value
            if (!(strpos($url, 'http') !== false)) { // check for relative links
                $url = $domain . '/' . $url;
            }

            // if this link is in the domain and has not already been crawled
            // (i.e. does not exist in the database)
            if ($this->Page->find('count', array('conditions' => array('Page.url' => $url))) < 1
                && (strpos($url, $domain) !== false)) {

                $this->Page->create();
                $this->Page->set('url', $url);
                $this->Page->set('indexed', date('Y-m-d H:i:s'));
                $this->Page->save(); // add this URL to the database

                $this->_collectData($url); // collect this link's data
                $this->_crawl($url, $domain); // crawl this link
            }
        }
    }


1 Answer

    Editorial Team
    Answered on May 13, 2026 at 1:22 pm

    You’re creating upwards of twice as many database queries as there are links on the page; I’d say that’s where your problem is. Try to just accumulate the links into an array, do one big batch query to filter out the duplicates, and insert the new records with a saveAll().
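    A minimal sketch of that batching idea, with plain arrays standing in for the Page model (in the real CakePHP code, $known would come from one find() over all harvested URLs, and $toInsert would go into a single saveAll() call; all names here are illustrative):

    ```php
    <?php
    // URLs already stored in the pages table (stand-in for one find() query).
    $known = array('http://www.example.com/a', 'http://www.example.com/b');

    // URLs harvested from one page, possibly with repeats and already-seen links.
    $harvested = array(
        'http://www.example.com/a',
        'http://www.example.com/c',
        'http://www.example.com/c',
        'http://www.example.com/d',
    );

    // One local dedupe pass, then one set difference -- instead of one
    // find('count') query per link.
    $unique   = array_unique($harvested);
    $toInsert = array_values(array_diff($unique, $known));

    // In CakePHP, $toInsert would now be shaped into rows and passed to
    // a single $this->Page->saveAll() call.
    print_r($toInsert);
    ```

    This turns N count-queries plus N inserts into one lookup and one batch insert per page.
    
    
    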


    Actually, looking at it again, you’re also recursively crawling all links, without any depth limit or abort condition. In other words, the script will continue as long as there are links to follow, which is potentially infinite. You should process one page at a time and crawl further links in another instance, for example using a queue/worker pattern.
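    The one-page-at-a-time idea can be sketched with an explicit queue and a depth limit instead of recursion; here a small in-memory link graph stands in for the real fetch-and-parse step, and the names ($graph, $maxDepth) are illustrative:

    ```php
    <?php
    // Fake link graph: page => links found on that page.
    $graph = array(
        '/'  => array('/a', '/b'),
        '/a' => array('/b', '/c'),
        '/b' => array('/'),
        '/c' => array(),
    );

    $maxDepth = 2;
    $visited  = array();
    $queue    = new SplQueue();
    $queue->enqueue(array('/', 0));

    while (!$queue->isEmpty()) {
        list($url, $depth) = $queue->dequeue();
        if (isset($visited[$url]) || $depth > $maxDepth) {
            continue; // already crawled, or too deep
        }
        $visited[$url] = true; // in the real app: save the Page record
        // _collectData($url) would run here, on exactly one page per iteration.
        foreach ($graph[$url] as $link) {
            $queue->enqueue(array($link, $depth + 1));
        }
    }
    ```

    Because each iteration handles one page and then releases it, memory stays bounded by the queue size rather than by the recursion depth; in production the queue would live in the database or a job system so separate worker requests can drain it.
    
    
    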



© 2021 The Archive Base. All Rights Reserved