
The Archive Base


Editorial Team
Asked: May 11, 2026

I need to crawl a finite list of websites and store their contents locally for future analysis. I basically want to slurp in all pages and follow all internal links to get the entire publicly available site.

Are there existing free libraries to get me there? I’ve seen Chilkat, but it’s for pay. I’m just looking for baseline functionality here. Thoughts? Suggestions?


Exact Duplicate: Anyone know of a good Python-based web crawler that I could use?
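The core of "follow all internal links" is extracting the links from each page and keeping only the same-site ones. As a baseline, here is a minimal sketch using only the Python standard library (the function and class names are illustrative, not from any library mentioned above):

```python
# Minimal sketch: collect href values from a page and keep only the
# absolute URLs that stay on the same host as the page itself.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse


class LinkCollector(HTMLParser):
    """Collects href attribute values from <a> tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def internal_links(base_url, html):
    """Return absolute URLs found in `html` that share `base_url`'s host."""
    parser = LinkCollector()
    parser.feed(html)
    host = urlparse(base_url).netloc
    urls = []
    for href in parser.links:
        absolute = urljoin(base_url, href)  # resolve relative links
        if urlparse(absolute).netloc == host:
            urls.append(absolute)
    return urls


page = '<a href="/about">About</a> <a href="http://other.example/x">Ext</a>'
print(internal_links("http://example.com/", page))
# → ['http://example.com/about']
```

A real crawler would add fetching, a visited-set, politeness delays, and robots.txt handling on top of this, which is exactly what the frameworks below provide.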



1 Answer

  1. Answered on May 11, 2026 at 12:30 am

    Use Scrapy.

    It is a Twisted-based web crawling framework. It is still under heavy development, but it already works and has many goodies:

    • Built-in support for parsing HTML, XML, CSV, and JavaScript
    • A media pipeline for scraping items with images (or any other media) and downloading the media files as well
    • Support for extending Scrapy by plugging in your own functionality using middlewares, extensions, and pipelines
    • A wide range of built-in middlewares and extensions for handling compression, caching, cookies, authentication, user-agent spoofing, robots.txt, statistics, crawl depth restriction, etc.
    • Interactive scraping shell console, very useful for developing and debugging
    • Web management console for monitoring and controlling your bot
    • Telnet console for low-level access to the Scrapy process

    Example code to extract information about all torrent files added today on the Mininova torrent site, using an XPath selector on the returned HTML:

    class Torrent(ScrapedItem):
        pass

    class MininovaSpider(CrawlSpider):
        domain_name = 'mininova.org'
        start_urls = ['http://www.mininova.org/today']
        rules = [Rule(RegexLinkExtractor(allow=[r'/tor/\d+']), 'parse_torrent')]

        def parse_torrent(self, response):
            x = HtmlXPathSelector(response)
            torrent = Torrent()
            torrent.url = response.url
            torrent.name = x.x('//h1/text()').extract()
            torrent.description = x.x("//div[@id='description']").extract()
            torrent.size = x.x("//div[@id='info-left']/p[2]/text()[2]").extract()
            return [torrent]
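    For readers without Scrapy installed, the field extraction above (grab the <h1> text and the text of a div by id) can be roughly approximated with the standard library. This is only a sketch, not Scrapy's API, and the sample HTML and field names are illustrative:

```python
# Rough stdlib analogue of the XPath extraction above: pull the <h1> text
# and the text inside <div id="description">. Nested <div>s are not
# handled in this sketch; real XPath engines do this properly.
from html.parser import HTMLParser


class FieldExtractor(HTMLParser):
    """Captures text inside <h1> and inside <div id="description">."""

    def __init__(self):
        super().__init__()
        self.name = ""
        self.description = ""
        self._in_h1 = False
        self._in_desc = False

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self._in_h1 = True
        elif tag == "div" and ("id", "description") in attrs:
            self._in_desc = True

    def handle_endtag(self, tag):
        if tag == "h1":
            self._in_h1 = False
        elif tag == "div":
            self._in_desc = False

    def handle_data(self, data):
        if self._in_h1:
            self.name += data
        elif self._in_desc:
            self.description += data


page = '<h1>My Torrent</h1><div id="description">A test file.</div>'
extractor = FieldExtractor()
extractor.feed(page)
print(extractor.name, "/", extractor.description)
# → My Torrent / A test file.
```

    Scrapy's selectors are far more capable (full XPath, chained queries), but the sketch shows what the `parse_torrent` callback is doing conceptually.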


© 2021 The Archive Base. All Rights Reserved
