Designing a multi-process spider in Python

I'm working on a multi-process spider in Python. It should start by scraping one page for links and work from there. Specifically, the top-level page contains a list of categories, the second-level pages list events in those categories, and the final, third-level pages list participants in the events. I can't predict how many categories, events or participants there'll be.

I'm at a bit of a loss as to how best to design such a spider, and in particular, how to know when it's finished crawling (it's expected to keep going till it has discovered and retrieved every relevant page).

Ideally, the first scrape would be synchronous, and everything else async to maximise parallel parsing and adding to the DB, but I'm stuck on how to figure out when the crawling is finished.

How would you suggest I structure the spider, in terms of parallel processes and particularly the above problem?

Best answer

I presume you are putting items to visit in a queue, having workers drain that queue, and letting the workers add any new items they find back onto the queue.

It's finished when all the workers are idle and the queue of items to visit is empty.

If the workers call the queue's task_done() method for each item they finish, the main thread can join() the queue to block until every item has been processed.
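
To make that concrete, here is a minimal sketch of the pattern using multiprocessing.JoinableQueue; the scrape() helper and the seed URL are hypothetical placeholders for your own fetch-parse-and-store logic:

    import multiprocessing as mp

    def scrape(url):
        # Placeholder: fetch the page, parse it, store results in the DB,
        # and return any new links discovered on it.
        return []

    def worker(queue):
        while True:
            url = queue.get()
            try:
                for link in scrape(url):
                    queue.put(link)
            finally:
                queue.task_done()   # mark this item as fully processed

    if __name__ == "__main__":
        queue = mp.JoinableQueue()
        queue.put("http://example.com/categories")   # seed with the top-level page
        for _ in range(4):
            mp.Process(target=worker, args=(queue,), daemon=True).start()
        queue.join()   # returns only when every put() has a matching task_done()
        print("Crawl finished")

Because the workers are daemonic infinite loops, they are simply killed when join() returns and the main process exits; no explicit shutdown signal is needed.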

Other answers

You might want to look into Scrapy, an asynchronous web scraper built on Twisted. For your task, it looks like the XPath expressions for the spider would be pretty easy to define!
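
For the three-level structure you describe, a Scrapy spider might look something like this sketch (the start URL and XPath expressions are made-up placeholders; substitute the ones for your site):

    import scrapy

    class EventSpider(scrapy.Spider):
        name = "events"
        start_urls = ["http://example.com/categories"]   # placeholder top-level page

        def parse(self, response):
            # Top level: follow each category link.
            for href in response.xpath("//a[@class='category']/@href").getall():
                yield response.follow(href, callback=self.parse_category)

        def parse_category(self, response):
            # Second level: follow each event link.
            for href in response.xpath("//a[@class='event']/@href").getall():
                yield response.follow(href, callback=self.parse_event)

        def parse_event(self, response):
            # Third level: emit one item per participant.
            for name in response.xpath("//li[@class='participant']/text()").getall():
                yield {"event": response.url, "participant": name.strip()}

Scrapy handles the "when am I done?" question for you: the crawl ends once every scheduled request has been fetched and parsed.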

Good luck!

(If you really want to do it yourself, maybe consider having a small SQLite db that keeps track of whether each page has been hit or not... or, if it's a reasonable size, just do it in memory... Twisted in general might be your friend for this.)
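
If you go that route, the seen-pages bookkeeping can be as small as this sketch (the file, table and function names are illustrative):

    import sqlite3

    conn = sqlite3.connect("crawl_state.db")
    conn.execute("CREATE TABLE IF NOT EXISTS seen (url TEXT PRIMARY KEY)")

    def mark_seen(url):
        # Returns True if the URL is new (and is now recorded),
        # False if it was already hit on an earlier visit.
        with conn:
            cur = conn.execute("INSERT OR IGNORE INTO seen (url) VALUES (?)", (url,))
        return cur.rowcount == 1

Note that in a multi-process crawler each worker would need to open its own connection rather than share one across processes.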




