
The Archive Base

Editorial Team
Asked: May 13, 2026 at 9:55 pm

I need to create a user configurable web spider/crawler, and I’m thinking about using


I need to create a user-configurable web spider/crawler, and I’m thinking about using Scrapy. But I can’t hard-code the domains and allowed URL regexes; these will instead be configurable in a GUI.

How do I (as simply as possible) create a spider or a set of spiders with Scrapy where the domains and allowed URL regexes are dynamically configurable? E.g. I write the configuration to a file, and the spider reads it somehow.
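One concrete way to realize the "write the configuration to a file" part (the JSON layout and field names below are purely an assumed example, not something Scrapy prescribes) is a small document the spider parses at startup:

```python
import json
import re

# Hypothetical config format -- the field names below are assumptions
# for illustration, not something the question specifies.
CONFIG_TEXT = r"""
{
    "name": "example",
    "start_urls": ["https://example.com/"],
    "allowed_domains": ["example.com"],
    "url_regexes": ["/articles/\\d+", "/tags/\\w+"]
}
"""

def load_spider_config(text):
    """Parse the JSON config and precompile the URL regexes."""
    cfg = json.loads(text)
    cfg["url_regexes"] = [re.compile(p) for p in cfg["url_regexes"]]
    return cfg

cfg = load_spider_config(CONFIG_TEXT)
print(cfg["allowed_domains"])  # ['example.com']
print(bool(cfg["url_regexes"][0].search("https://example.com/articles/42")))  # True
```

The GUI would only need to serialize its form fields into this file; the spider never sees the GUI itself.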



1 Answer

  1. Editorial Team
     Added an answer on May 13, 2026 at 9:55 pm

    WARNING: This answer was for Scrapy v0.7; the spider manager API has changed a lot since then.

    Override the default SpiderManager class, load your custom rules from a database or somewhere else, and instantiate a custom spider with your own rules/regexes and domain_name.

    in mybot/settings.py:

    SPIDER_MANAGER_CLASS = 'mybot.spidermanager.MySpiderManager'
    

    in mybot/spidermanager.py:

    from mybot.spider import MyParametrizedSpider
    
    class MySpiderManager(object):
        loaded = True
    
        def fromdomain(self, name):
            start_urls, extra_domain_names, regexes = self._get_spider_info(name)
            return MyParametrizedSpider(name, start_urls, extra_domain_names, regexes)
    
        def close_spider(self, spider):
            # Put here any code you want to run before the spider is closed
            pass
    
        def _get_spider_info(self, name):
            # query your backend (maybe a sqldb) using `name` as primary key, 
            # and return start_urls, extra_domains and regexes
            ...
            return (start_urls, extra_domains, regexes)
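
The `_get_spider_info` stub can be backed by anything; as one hedged possibility (the `spiders` table and its JSON-encoded columns below are assumptions, not part of the answer), a SQLite version might look like:

```python
import json
import sqlite3

# In-memory demo database standing in for the backend the answer
# mentions; the table name and column layout are assumptions.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE spiders (name TEXT PRIMARY KEY, "
            "start_urls TEXT, domains TEXT, regexes TEXT)")
con.execute("INSERT INTO spiders VALUES (?, ?, ?, ?)",
            ("news",
             json.dumps(["https://news.example/"]),
             json.dumps(["news.example"]),
             json.dumps([r"/\d{4}/"])))

def get_spider_info(con, name):
    """Return (start_urls, domains, regexes) for one configured spider."""
    row = con.execute(
        "SELECT start_urls, domains, regexes FROM spiders WHERE name = ?",
        (name,),
    ).fetchone()
    return tuple(json.loads(col) for col in row)

start_urls, domains, regexes = get_spider_info(con, "news")
print(start_urls, domains, regexes)
```

In the manager above, logic like this would fill in the body of `_get_spider_info`, returning the same three-element tuple.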
    

    and now your custom spider class, in mybot/spider.py:

    from scrapy.spider import BaseSpider
    
    class MyParametrizedSpider(BaseSpider):
    
        def __init__(self, name, start_urls, extra_domain_names, regexes):
            self.domain_name = name
            self.start_urls = start_urls
            self.extra_domain_names = extra_domain_names
            self.regexes = regexes
    
        def parse(self, response):
            ...
    

    Notes:

    • You can extend CrawlSpider too if you want to take advantage of its Rules system.
    • To run a spider use: ./scrapy-ctl.py crawl <name>, where name is passed to SpiderManager.fromdomain and is the key to retrieve more spider info from the backend system.
    • As this solution overrides the default SpiderManager, coding a classic spider (one Python module per spider) doesn’t work, but I think this is not an issue for you. More info on the default spider manager: TwistedPluginSpiderManager.
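
    As the warning says, the SpiderManager API above is long gone. The underlying idea, one parametrized class configured at runtime instead of one hand-written module per spider, is version-independent and can be sketched with plain Python (the `BaseSpider` stand-in and the `wants` helper below are assumptions; in a current Scrapy project you would subclass `scrapy.Spider` and let Scrapy construct the instance from `-a` command-line arguments):

```python
import re

class BaseSpider:
    """Stand-in for a spider base class (assumption: real code would
    subclass scrapy.Spider instead of this)."""
    def __init__(self, name, **kwargs):
        self.name = name

class ParametrizedSpider(BaseSpider):
    """One class whose behaviour comes entirely from runtime config,
    mirroring MyParametrizedSpider in the answer above."""
    def __init__(self, name, start_urls, allowed_domains, regexes):
        super().__init__(name)
        self.start_urls = start_urls
        self.allowed_domains = allowed_domains
        self.regexes = [re.compile(p) for p in regexes]

    def wants(self, url):
        """True if any configured regex matches the URL."""
        return any(r.search(url) for r in self.regexes)

# Two differently configured spiders from the same class -- no
# per-spider Python module needed.
news = ParametrizedSpider("news", ["https://news.example/"],
                          ["news.example"], [r"/\d{4}/\d{2}/"])
docs = ParametrizedSpider("docs", ["https://docs.example/"],
                          ["docs.example"], [r"/api/"])

print(news.wants("https://news.example/2026/05/story"))  # True
print(docs.wants("https://news.example/2026/05/story"))  # False
```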

© 2021 The Archive Base. All Rights Reserved