Scrapy: with multiple spiders, one spider's items never reach the pipeline

In one Scrapy project I have two spider files under spiders/, with name set to name1 and name2.
When I run scrapy crawl name1, the items go through the pipeline,
but when I run scrapy crawl name2, they never reach the pipeline.
I have been debugging for a long time and can't figure out why. Is there a way to fix this?
For reference, my project is named generic, and the settings file is as follows:

# -*- coding: utf-8 -*-

# Scrapy settings for generic project
#
# For simplicity, this file contains only the most important settings by
# default. All the other settings are documented here:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#

BOT_NAME = 'generic'

SPIDER_MODULES = ['generic.spiders']
NEWSPIDER_MODULE = 'generic.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'generic (+http://www.yourdomain.com)'

#dns cache
#EXTENSIONS={"scrapy.contrib.resolver.CachingResolver": 0,}
CONCURRENT_REQUESTS = 100
COOKIES_ENABLED = False
LOG_LEVEL = 'INFO'


ITEM_PIPELINES = {'generic.pipelines.MongoStorePipeline.Pipeline': 1}
# [
# 'generic.pipelines.ImagePipeline.LogoImage',
# 'generic.pipelines.MongoStorePipeline.Pipeline'
# ]

ITEM_PIPELINES = {'scrapy.contrib.pipeline.images.ImagesPipeline': 2}
IMAGES_STORE = './companylog'


LOG_FILE = "logs/scrapy.20150428.log"
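One detail worth noting about the settings above: in Python, a later `ITEM_PIPELINES = {...}` assignment replaces the earlier dict entirely, so as written only `ImagesPipeline` remains registered. A sketch of a merged registration (the order values are illustrative, not from the project):

```python
# In settings.py, a second "ITEM_PIPELINES = {...}" line silently replaces
# the first, so both pipelines have to live in a single dict.
# Lower numbers run first; the values 1 and 2 here are illustrative.
ITEM_PIPELINES = {
    'generic.pipelines.MongoStorePipeline.Pipeline': 1,
    'scrapy.contrib.pipeline.images.ImagesPipeline': 2,
}
```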


2 Answers



In the end I configured just one pipeline, and did the per-spider check inside that pipeline.
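The approach this answer describes can be sketched roughly as follows: register a single pipeline class in `ITEM_PIPELINES` and branch on `spider.name` inside `process_item()`. The class name `DispatchingPipeline` and the recorded branch labels are assumptions for illustration, not names from the project:

```python
# Sketch of the answer's approach (assumed names): one pipeline class,
# registered alone in ITEM_PIPELINES, that branches on spider.name.

class DispatchingPipeline(object):
    def __init__(self):
        # For illustration only: remember which branch handled each item.
        self.handled = []

    def process_item(self, item, spider):
        # Dispatch per spider instead of maintaining per-spider pipelines.
        if spider.name == 'name1':
            self.handled.append('mongo')   # e.g. store the item in MongoDB
        elif spider.name == 'name2':
            self.handled.append('images')  # e.g. download logo images
        # Always return the item so any later pipelines still receive it.
        return item
```

In settings.py this would then be registered once, e.g. `ITEM_PIPELINES = {'generic.pipelines.DispatchingPipeline': 1}` (module path assumed).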
