Twisted errors in Scrapy spider

When I run the spider from the Scrapy tutorial I get these error messages:

File "C:Python26libsite-packages	wistedinternetase.py", line 374, in fireEvent DeferredList(beforeResults).addCallback(self._continueFiring)  

File "C:Python26libsite-packages	wistedinternetdefer.py", line 195, in addCallback callbackKeywords=kw)

File "C:Python26libsite-packages	wistedinternetdefer.py", line 186, in addCallbacks self._runCallbacks()

File "C:Python26libsite-packages	wistedinternetdefer.py", line 328, in_runCallbacks self.result = callback(self.result, *args, **kw)

--- <exception caught here> ---

File "C:Python26libsite-packages	wistedinternetase.py", line 387, in _continueFiring callable(*args, **kwargs)

File "C:Python26libsite-packages	wistedinternetposixbase.py", line 356, in listenTCP p.startListening()

File "C:Python26libsite-packages	wistedinternet	cp.py", line 858, in startListening raise CannotListenError, (self.interface, self.port, le) twisted.internet.error.CannotListenError: Couldn t listen on any:6023: [Errno 10048] 

Only one usage of each socket address (protocol/network address/port) is normally permitted.

Does anyone know what they are and how I can get rid of them?

Thanks

Best answer

Perhaps you're running two Scrapy processes simultaneously with the telnet console enabled?
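One quick way to confirm is to try binding the telnet-console port yourself: if another process (say, a second Scrapy run) already holds it, the bind fails with the same error 10048 (WSAEADDRINUSE) on Windows. A minimal sketch, with the helper name port_is_free made up for illustration:

    import socket

    def port_is_free(port, host="0.0.0.0"):
        # Attempt to bind the port; bind() raises socket.error if some
        # other process is already listening on it.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.bind((host, port))
        except socket.error:
            return False
        finally:
            s.close()
        return True

    # 6023 is the port the traceback shows Scrapy's telnet console trying to use.
    print("port 6023 free: %s" % port_is_free(6023))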

If you want to run more than one Scrapy process at the same time, you must disable the web and telnet consoles (or, at least, change their ports).
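A minimal settings sketch, assuming a Scrapy version that uses the TELNETCONSOLE_* setting names (very old releases named the web-console settings differently, so check the settings reference for your version):

    # settings.py

    # Disable the telnet console entirely, so this process never tries
    # to listen on port 6023.
    TELNETCONSOLE_ENABLED = False

    # Alternatively, keep it enabled but give each process its own port
    # (or a range of ports to pick a free one from):
    # TELNETCONSOLE_PORT = [6024, 6073]

With the console disabled (or each process on its own port), nothing fights over any:6023 and the CannotListenError disappears.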

Other answers

No other answers yet.



